Research:IT Methodologies


Research methods

Action Research

Case study

According to Yin (1994)[1], a case study is empirical research that investigates a contemporary phenomenon within its real-life context. Evidence from multiple case studies is more convincing, and a multiple-case study is therefore regarded as more robust (Yin, 2009)[2].

From Nesbit & Martin (2011) [3]

The methodology adopted in the paper is a combination of Case Study Research (Yin, 2009); Action Research (Cohen, Manion & Morrison, 2007)[4] in which changes are made on an ongoing basis; and Design Based Research as described in Barab & Squire (2004)[5], Bell (2004)[6] and Wang & Hannafin (2005)[7] where the emphasis is on conducting experiments in changing contexts where it is not always possible to replicate the context.

Parts of the following discussion are taken from Albertyn (2010)[8].

Case studies present stories of real-life situations, and a number of researchers have used case study research successfully. Three important methodological articles on using the case study method in the IS field are those by Benbasat et al. (1987)[9], Dubé and Paré (2003)[10] and Lee (1989)[11]. An article by Markus (1983) is one of the most cited empirical examples of case study research in Information Systems (Myers, 1997)[12] (Needs updating -MV).

Case study methodology is often used in the IS environment to accommodate the applied nature of research in this field. It supports the numerical focus needed here for building theory, and it allows the theory to be tested with empirical validity. The author of the thesis developed a new theory not previously described; according to Eisenhardt (1989)[13], case study research is well suited to this scenario (Needs a more current ref -MV).

Gillham, B. (2000). Case study research methods.

Design Science

  • Create an artifact and use it in an educational setting.
  • The combination of usage logs, survey data, and interview results can be used to evaluate a design model and related software implementation.
  • (March & Smith, 1995)[14]
  • “build and evaluate” cycle (Hevner, March, & Park, 2004)[15]

Research models

Technology Acceptance Model (Davis)

Rogers' (1995) model for technological innovation

Identifies five critical characteristics of an innovation that influence its adoption:

  1. Relative advantage
  2. Compatibility
  3. Complexity
  4. Trialability
  5. Observability

Web-based Learning Environment Instrument (WEBLEI)

The Web-based Learning Environment Instrument (WEBLEI) (Chang & Fisher, 2003)[16] was developed to gather quantitative data on students’ experience of e-learning systems in tertiary environments (Chandra & Fisher, 2006)[17].
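Since the WEBLEI yields Likert-style item scores grouped into scales, the quantitative step it supports can be sketched as averaging each scale's items across respondents. The scale names, items and scores below are invented for illustration and are not the actual instrument:

```python
from statistics import mean

# Hypothetical WEBLEI-style responses: each student answers 5-point
# Likert items (1 = almost never, 5 = almost always), grouped by scale.
# Scale names and scores here are illustrative, not the real instrument.
responses = {
    "student_1": {"access": [4, 5, 4], "interaction": [3, 3, 4]},
    "student_2": {"access": [5, 4, 5], "interaction": [2, 3, 3]},
}

def scale_means(responses):
    """Average each scale's item scores across all respondents."""
    pooled = {}
    for answers in responses.values():
        for scale, items in answers.items():
            pooled.setdefault(scale, []).extend(items)
    return {scale: round(mean(items), 2) for scale, items in pooled.items()}

print(scale_means(responses))
```

In practice the instrument's own scales and response anchors would replace the illustrative ones above.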


Hammersley and Atkinson (1983)[18] emphasise the researcher's overt or covert participation in research subjects' daily lives for an extended period: observing, listening to conversations, and collecting data that contributes to an understanding of the issues being investigated.

Rybas and Gajjala (2007)[19] describe cyberethnography (ethnography in technology-mediated environments), which includes both the production and consumption of technological artefacts.


The sample was a non-random convenience sample. Participation was voluntary, and consent was implied by completion of the questionnaire.


Parts of the following discussion are taken from Albertyn (2010)[8].

Validity is the best possible approximation to the truth for a specific scenario or proposition. It is the measure used to reach valid conclusions, or the degree to which specific samples enable valid inferences (Trochim, 2006)[20]. There are five types of validity: content validity, predictive validity, concurrent validity, construct validity and face validity (Burns, 2000[21]; Trochim, 2006[20]).

Content validity is whether the content is representative of the whole and whether the sample is adequate. It is determined by expert judgement and is a non-statistical type of validity. Predictive validity concerns using assessments or other techniques to predict performance on some other criterion; the criterion measurement should be reliable. Concurrent validity differs from predictive validity in terms of time only: it has to do with the now, with what is occurring now.

Construct validity has to do with explaining aspects of human behaviour. It refers to all the evidence gathered and determines whether a particular operationalisation of a construct is representative of what is intended by the theoretical account of the construct being measured (Burns, 2000[21]; Trochim, 2006[20]). Construct validity therefore depends on the theory, and uses it to determine whether a test assesses all domains of a specific criterion. This assumes a causal relationship and determines whether the program reflected the construct well and whether the measure used reflected the idea of the construct being measured.

Face validity is concerned with whether the test being conducted appears to measure what was intended, judged by examining the different items (Burns, 2000[21]; Trochim, 2006[20]). Construct validity, in contrast to face validity, is the degree to which the measures used by the researchers relate to the abstract construct being investigated (Smith & Glass, 1987[22]; Yin, 2006[23]).
The construct, as defined by the researcher, needed to match the definitions of the concept in the literature. These literature concepts would have to be well established and seen as having authority in the area being investigated (Dey, 1993)[24].

Another method of facilitating validation of the results is triangulation: cross-verification from two or more sources. A number of research methodologies can be applied and combined when studying the same outcomes (Benbasat et al., 1987[9]; Yin, 1994[25] (note this ref should be updated to Yin's 2006 book)). The method can be employed in both quantitative (validation) and qualitative (inquiry) studies, so the credibility of qualitative analyses can be grounded using a method-appropriate strategy (Benbasat et al., 1987[9]; Yin, 1994[25]). Researchers can thereby limit and overcome the weaknesses, problems and biases that can occur when a single method is used (Benbasat et al., 1987[9]; Yin, 1994[25]).

Denzin (2006)[26] identified four types of triangulation: data triangulation, which involves time, space and people; investigator triangulation, where a number of researchers are involved; theory triangulation, where more than one theory is used to interpret the phenomenon; and methodological triangulation, where more than one method is used to gather data, such as interviews, observations, questionnaires and document gathering.
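Methodological triangulation, the last of Denzin's four types, can be illustrated as a simple cross-verification rule: a claim counts as corroborated only when at least two independent data sources support it. The claims, source names and threshold below are assumptions for illustration only:

```python
# Illustrative sketch of methodological triangulation: a finding is
# treated as corroborated only when two or more independent data-
# gathering methods support it. All claims and sources are invented.
evidence = {
    "students prefer online feedback": {"survey", "interviews", "usage_logs"},
    "lecturers resist the platform":   {"interviews"},
}

def corroborated(evidence, min_sources=2):
    """Keep only the claims supported by at least min_sources methods."""
    return [claim for claim, sources in evidence.items()
            if len(sources) >= min_sources]

print(corroborated(evidence))  # only the first claim is cross-verified
```

A real study would compare the substance of the findings, not just count sources, but the rule captures the intent of cross-verification.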


Burnard’s (1991)[27] thematic content analysis proceeds in stages:

  1. Reading to identify main themes,
  2. Re-reading to identify specific loadings and categories, and removing irrelevant material (open coding),
  3. Creating a list by re-sorting categories, grouping, and removing extraneous material,
  4. Research findings validated by two colleagues, with the category list discussed and adjusted,
  5. Transcripts and categories [re]examined, and data linked to category headings,
  6. Transcripts coded against categories and sub-headings,
  7. Themes and findings linked to the literature,
  8. Respondents asked to validate and check categories, with adjustments made,
  9. Write-up using categories and referring to transcripts.
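The coding step (6) above can be sketched as matching transcript segments against an agreed category list and tallying the results, so that themes can later be linked back to the literature. The categories, keywords and transcript lines below are invented for illustration; real coding is done by the researcher and validated by colleagues as in steps 4 and 8:

```python
from collections import Counter

# Hypothetical category list with indicator keywords; in Burnard's
# method these categories emerge from open coding, not a fixed list.
category_keywords = {
    "motivation": ["motivated", "enjoy", "interested"],
    "technology": ["facebook", "online", "platform"],
}

def code_segment(segment, category_keywords):
    """Return every category whose keywords appear in the segment."""
    text = segment.lower()
    return [cat for cat, words in category_keywords.items()
            if any(w in text for w in words)]

# Invented transcript segments standing in for interview data.
transcript = [
    "I enjoy working online with my group",
    "The platform was confusing at first",
]
tally = Counter(cat for seg in transcript
                for cat in code_segment(seg, category_keywords))
print(tally)
```

Keyword matching is only a stand-in here; the judgement calls in steps 2-5 are what give the categories their validity.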

Patient Rambe[28] identifies five different ways learners engage cognitively: 1. proximate (achievement motivated), 2. emergent (social networkers), 3. distal/divergent (lurkers), 4. challenged (disagree with what you are doing, and do it because they have to), and 5. acolytes (e.g. someone who follows the textbook).

Using a critical ethnographic method of inquiry, Rambe joined a large Facebook feedback group to study the way students interacted with each other and with their lecturers. "You might be surprised to learn that there were some students who were reserved and withdrawn in class, but very vocal online." Different categories emerged. There were the

  1. 'cognitively proximate', or 'trailblazers' - a highly active and well-networked group that used a diversified range of social networking platforms, not just Facebook, to find information; for this group the role of the lecturer in providing access to and validation of information was increasingly diminished. Then there were the
  2. 'cognitively emergent' group, who used Facebook only to socialise, and the
  3. 'cognitively distal' group, who were there because it was a course "requirement" but saw no real value in it;
  4. ... and finally the 'disciples or acolytes' group, who used Facebook only to acquire what the lecturer had provided.

These last three groups failed to leverage the balance of power, and actually increased their dependence on the lecturer for academic support.

References

  1. Yin, R.K. (1994). Case study research: Design and methods (2nd ed.). Thousand Oaks, CA: Sage Publications.
  2. Yin, R.K. (2009). Case Study Research: Design and Methods: Sage Publications
  3. Nesbit & Martin (2011). eLearning: A Solution in a Crisis: Don’t Forget the Pedagogy. In S. Mann & M. Verhaart (Eds.), The 2nd annual conference of Computing and Information Technology Research and Education New Zealand (CITRENZ2011), incorporating the 23rd Annual Conference of the National Advisory Committee on Computing Qualifications, Rotorua, New Zealand, July 5-8.
  4. Cohen, L., Manion, L. & Morrison, K. (2007). Research methods in education. Psychology Press
  5. Barab, S. & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13(1), 1-14
  6. Bell, P. (2004). On the theoretical breadth of design-based research in education. Educational Psychologist.
  7. Wang, F. and Hannafin, M.J. (2005). Design-based research and technology-enhanced learning environments. Journal of Educational Technology Research and Development Vol 53(4) 5-23
  8. Albertyn, F.A. (2010). Thesis.
  9. Benbasat, I., Goldstein, D.K. and Mead, K. (1987). The case research strategy in studies of information systems. MIS Quarterly, September 1987, 369-386.
  10. Dubé, L., and Paré, G. (2003). Rigor in Information Systems Positivist Case Research: Current Practices, Trends, and Recommendations, MIS Quarterly (27:4) 2003, pp 597-636.
  11. Lee, A.S. (1989). A Scientific Methodology for MIS Case Studies. MIS Quarterly (13:1), 1989, pp. 33-52.
  12. Myers, M.D. (ed.) (1997). Qualitative Research in Information Systems: References on Case Study Research. MISQ Discovery. Retrieved 20 January 2010.
  13. Eisenhardt, K. M. (1989). Building Theories from Case Study Research. Academy of Management. The Academy of Management Review 14(4): 532-550.
  14. March, S.T., & Smith G. F. (1995). Design & Natural Science Research on Information Technology. Decision Support Systems, 15, 251-266.
  15. Hevner, A.R., March, S.T., & Park, J. (2004). Design Science in Information Systems Research. MIS Quarterly, 28(1), 75-105.
  16. Chang, V. & Fisher, D. L. (2003) The validation and application of a new learning environment instrument for online learning in higher education. In Technology rich learning environments: A future perspective. 1-20. FISHER, D.L. and KHINE, M. S. (eds). Singapore, World Scientific Publishing.
  17. Chandra, V. & Fisher, D. L. (2006) Assessing the effectiveness of a blended web-based learning environment in an Australian high school. In Contemporary approaches to research on learning environments: Worldviews.461 – 478. FISHER, D.L. and KHINE, M. S. (eds). Singapore, World Scientific Publishing Co. Pte. Ltd.
  18. Hammersley, M. & Atkinson, P. (1983). Ethnography: principles in practice. Tavistock: London.
  19. Rybas, N. & Gajjala, R. (2007). Developing cyberethnographic research methods for understanding digitally mediated identities [33 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 8(3), Art. 35. Retrieved May 12, 2011.
  20. Trochim, W.M.K. (2006). Research Methods Knowledge Base (4th ed.). Pearson Publications.
  21. Burns, R.B. (2000). Introduction to research methods (4th ed.). London: Sage Publications.
  22. Smith, M. & Glass, G. (1987) Research and Evaluation in education and the Social Sciences. New Jersey: Prentice Hall
  23. Yin, R. K. (2006). Case study methods. In J. L. Green, G. Camilli & P. B. Elmore (Eds.), Handbook of Complementary Methods in Education Research. Routledge.
  24. Dey, I (1993) Qualitative data analysis: a user-friendly guide for social scientists. London: New York, NY : Routledge.
  25. Yin, R. (1994). Case study research: Design and methods (2nd ed.). Beverly Hills, CA: Sage Publishing.
  26. Denzin, N. (2006). Sociological Methods: A Sourcebook (5th ed.). Aldine Transaction. ISBN 978-0-202-30840-1.
  27. Burnard, P., (1991). A method of analysing interview transcripts in qualitative research. Nurse Education Today, 11, 464–466.
  28. Social networking loosens up traditional modes of instruction. Retrieved August 14, 2011.

Research:IT Methodologies. (2018). In virtualMV's (Michael Verhaart) wiki. Retrieved December 16, 2018.