Annotated Bibliography

This was delivered as part of the Instructional Design and E-Authoring Module. A downloadable version is available here: Annotated bibliography Padraig McDonagh

Introduction

This annotated bibliography has been created to show some of the research that helped to inform and support me in the creation of our e-learning resource. Although more papers and guides were used than the ones shown here, these provide a good representation of the breadth of subject matter that was taken into consideration. Looking back, it is interesting to see how my research started with the elementary task of resource design, focused tightly on video and screencast techniques, and then "zoomed" back out to ensure that the design and the learning worked well together. In this regard, the annotated bibliography has provided me with an audit trail of my own thought process on the project.

PAPER ONE

Granic, A., & Cukusic, M. (2011). Usability Testing and Expert Inspections Complemented by Educational Evaluation: A Case Study of an e-Learning Platform. Educational Technology & Society, 14(2), 107-123.

The authors of this paper set out to analyse the design of a Europe-wide online learning platform. The platform was part of a large project which attempted to implement a standard e-learning platform in several countries. This platform could be used for teaching any subject and was intended to act as an e-learning platform, an m-learning platform, and a knowledge repository.

In the study the methods of analysis were both quantitative and qualitative, and they attempted to assess the system from a number of perspectives. Quantitatively, one of the techniques used was to set a number of different tasks for both students and teachers. They measured not only whether each task could be completed, but also how long it took. The tests were conducted with students and teachers from a number of different countries, which could be useful for highlighting cultural differences. The qualitative analysis took the form of satisfaction surveys, which highlighted a number of areas of concern, particularly among the teaching staff.

There were two aspects of the paper which I found particularly interesting: the analysis of task timings and the attempt to measure the pedagogical effectiveness of certain facilities within the system. The timings and task completions are something that would be useful for our project, as tasks which take too long or cannot be completed easily may dissuade our target audience. Secondly, the paper's use of a scale for measuring pedagogical content was attractive to me because, as an educator, I feel it is important to build resources which provide a good basis for learning. This should improve not only the learning of users of the system but also the learning of the team.

PAPER TWO

Oehrli, J. A., Piacentine, J., Peters, A., & Nanamaker, B. (2011, March). Do screencasts really work? Assessing student learning through instructional screencasts. In ACRL 2011 Conference Proceedings (Vol. 30, pp. 127-144). Chandos.

This paper attempted to critically assess how much students learn from screencasts, and what the best practices are for creating them. The study was carried out on a group of students who had to use screencasts to help them improve their search techniques on the college's electronic library system. In consultation with the librarians, the authors created three inter-related tasks and then created screencasts to guide students through those tasks. They also talked to instructional designers, who gave them guidance on creating screencasts that explained the purpose of the search and then showed how to conduct it.

They conducted pre-tests and post-tests and recorded success and failure rates in both. Students who failed task one of the pre-test were not allowed to progress to further tasks, to prevent "contamination" of the post-test results.

Following the study, they were able to produce results suggesting that screencasts would indeed help to facilitate student searches for information in the library and had helped with the learning. Two issues I had with the study were the small sample size (fifteen students, not all of whom were new to the college) and the use of vouchers to incentivise participation. If one is working with such a small group, then using gifts endangers the integrity of the study, as subjects may invest extra effort in return for the rewards.

For our project the paper is valuable because it demonstrates that screencasts can work. It shows that we need to ensure that the purpose of the screencast is emphasised, as well as the learning objective being addressed. Finally, it helps to validate our choice of technology for use with students and assessment.

PAPER THREE

O’Farrell, C. (2002). Enhancing student learning through assessment. Dublin Institute of Technology.

This is a handbook produced by Dublin Institute of Technology which provides a comprehensive framework for educators to enhance their own assessment strategies. It provides guidelines on the elements that should be used as part of a course assessment strategy and how they can be used to aid student learning. It gives educators an easy-to-use set of checklists to ensure that all the relevant areas are being covered and that assessments will contribute to the learning experience. It draws on a wealth of resources to lead the educator through the why of assessment, what should be assessed, and how it should be done.

The guide is very well written and addresses common errors in assessment creation and marking. It takes the educator through assessment design, making it more effective and more interesting, and, critically, encourages the educator to reflect on assessments to improve future teaching.

For our project it is a wonderful resource, as our project centres on assessment. Whilst it is easy to concentrate on the mechanics of the applications we are proposing, it is important to recognise that they should serve as tools to help us assess, not as the focus of the assessment design. The guide also helped us to formulate a strategy to assess the users of the resource, to ensure that they take away the skills they need for assessment. It provided some good checklists which we could use to check the assessments inside the resource and adjust them if necessary.

PAPER FOUR

Powell, L. M., & Wimmer, H. (2014). Evaluating the Effectiveness of Self-Created Student Screencasts as a Tool to Increase Student Learning Outcomes in a Hands-On Computer Programming Course. In Proceedings of the Information Systems Educators Conference (ISSN 2167-1435).

This study set out to assess screencasts as a method of note-taking and revision for learners studying computer programming in third-level education. It sought to compare test scores and assignment results between students who used screencasts and those who did not. The study was conducted over four semesters, alternating the groups in an attempt to compare results fairly.

The study used an online screencast creation tool to record practical classes and lab sessions, but excluded lectures as these were not considered suitable for the application. Students were encouraged to record their own actions as the lecturer performed tasks on screen and to review these sessions after class. They were also encouraged to record their own work outside of class time and to use the recordings as a study aid.

The results showed a significant improvement in grades between the groups who used screencasts and those who did not. The authors proposed that students who recorded their actions were better able to recall what they had been thinking during those sessions and to draw on it when it came to assessment time. They also noted that screencasting was an appropriate method of note-taking for applications such as computer programming environments, where the interface can be complex and difficult to explain in words.

This study fits perfectly with the technical assessment element of our project, as it matches exactly what we are proposing to use screencasting for, i.e. assessing students using computer programs and subsequently using the recordings as a study aid. It highlights the ease of using screencasting with both technical and non-technical students (the study group were business students) and the value of having students create their own revision material.

PAPER FIVE

Morain, M., & Swarts, J. (2012). YouTutorial: A framework for assessing instructional online video. Technical Communication Quarterly, 21(1), 6-24.

This study investigated online videos from YouTube as a method of instruction and attempted to document the differences between poor, average, and good videos. It used a number of criteria to "grade" the videos, such as production quality, sound, confidence, and content. The authors began by looking for videos which could be classed as instructional, as opposed to serving entertainment or other purposes.

They searched for videos with high, medium, and low numbers of views. They then measured the correlation between the number of views and the average ratings. At the time, YouTube let viewers rate videos on a scale of one to five stars; this has since changed to a thumbs-up and thumbs-down mechanism reminiscent of the days of the Roman Coliseum. Finally, the study measured how the good, average, and poor videos compared against rubrics created by the team. These rubrics were built on recommendations from the literature (Clark & Mayer 2008, Bandura 1977) on effective methods of production and instruction.

The study found that videos which were high in production value, focused in objective, and conveyed a sense of confidence received higher ratings than those which were average or poor in those same categories. The authors also validated some of their findings by comparing their results to ratings from subject-matter and instructional experts, whom they employed to rank some of the same videos independently.

This study was very helpful for our project as it helps to inform the advice we will be giving to educators on creating effective videos.  It also helps to inform our team on standards which should be adhered to if we create any videos of our own as part of the resource.  It was a reasonably straightforward paper, easy to read, and I liked the validation of results rather than just relying on the knowledge or expertise of the YouTube community to identify good videos.

PAPER SIX

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295-312.

This paper examined the link between instructional design and cognitive load, specifically with the aim of identifying practices that impair learning. It proposes that all instructional design must be cognisant of learners and their ability to absorb and process information in a meaningful manner. It explains the theories behind cognitive load and, specifically, the inhibitors to the successful construction of meaning from new information. It also discusses the interactivity between elements and proposes some guideline ratios between the elements of schemas and their interactivity to facilitate better learning.

The paper starts with information on schemas and their construction. It examines the differences between extrinsic schemas, which are introduced by educators, and intrinsic schemas, which are formed by the learners. It also examines learning from a functional point of view and attempts to classify different types of learning, such as knowledge acquisition and problem solving. It categorises the difficulties faced by novice, intermediate, and expert learners. It suggests progressively different and more difficult strategies based on the level of learner being targeted, and seeks to highlight the roadblocks to learning which are often unknowingly constructed by educators themselves. The paper also distinguishes between educators and instructional designers, acknowledging that these can be two distinctly different roles within an education framework, but that each must be aware of the challenges facing the other.

I really liked this paper for a number of reasons: it brought together a number of facets of the resource that we are building and framed them within the categories of teaching and instructional design. It showed me the dangers posed by "extraneous" cognitive load and techniques to identify and minimise it. It also showed that our project is based on good theory from a learning and instructional point of view, which reassures me that we are creating a valid learning artefact.
