
2.4 Evaluation Methodology

2.4.1 Introduction

As mentioned in Chapter 2.2.4, content creators (Mozilla) and pilot audiences (formal and informal educators) work together to create an interest-based curriculum while the target audience improves its own web literacy skills. Because the curriculum and activities are modularized and co-designed, the evaluation of the content used in learning situations is ongoing. The various evaluations are run internally (Schütt, 2010), as there is not yet a need for external evaluators.

Peer-based evaluations are used within the course to evaluate the general web literacy competencies of the target audience. These evaluations take place regularly: a new peer-based evaluation is completed each week or after each module. The evaluations pose a series of standardized qualitative questions (see Chapter 2.4.3) designed to gather data that can be used both to show the progression of web literacies and to evaluate individual activities in the course.

Formative evaluations are used to determine problem areas and to harvest good ideas for improving the content and programming. The content creators play the role of participant observer, because adjustments to the overarching program and course material are made based on feedback from participants. Facilitators of the course informally observe the educators during face-to-face sessions, and those educators serve as focus groups for the curriculum, projects and activities when the course is finished. Data are collected through interviews, focus groups, and observations of how participants use the materials. Surveys are used to collect responses to targeted questions. Since the target group is made up of mentors who want to share web literacy skills with their own constituents, they are expected to bring new insight; the participants therefore constitute a non-random sample (Flick, 2009).

Summative evaluation is used to study and judge the success of each of the programs, projects, and the overall initiative. Qualitative survey questions used in formative evaluations are mixed with quantitative evaluation methods. The quantitative data are pulled to give an eagle-eye view of the Webmaker initiative’s success as a whole, as well as that of its individual programs and processes. Other quantitative data are pulled from metrics outlined in the next chapter. Because the quantitative data described in the next chapter are raw metrics, it is interpretation that turns those numbers into a picture of whether or not the concept is successful. Reflections on how these data can be interpreted are also contained in the next chapter.

A number of different metrics are used to collect data about the viability of the educational concept, its application in various contexts and the ever-important fun factor of the material.

To zero in on problems with the concept itself, it makes sense to run evaluations on each of the four levels at which blending can occur. Each level of the blended learning educational concept is examined independently, and those results are catalogued to evaluate the concept as a whole.

2.4.2 Quantitative Metrics and Approaches

The following data are collected to determine understanding of and awareness about the Open Web and the Mozilla learning initiatives:

• Basic demographics

• Number of total page views by language and country

• Referrer stats

• Clickstream

At the institutional level, collecting demographics allows Mozilla to see where there are spikes in activity. This information helps Mozilla develop strategic partnerships to further their mission. Both demographics and number of page views organized by locale also helps the foundation target specific localization practices and communities. Referrer stats are important for partnership development and evaluation, and, on the institutional level, the clickstream is important for responsible monetary expenditures.
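To illustrate how the page-view metric could be derived from raw logs, the following is a minimal Python sketch that aggregates page views by language and country. It assumes a simple event-dictionary log format; the field names ('type', 'locale', 'country') are illustrative only and not prescribed by the concept.

```python
from collections import Counter

def page_views_by_locale(events):
    """Count page views per (language, country) pair.

    Each event is a dict; the keys 'type', 'locale' and 'country'
    are illustrative, since the concept does not prescribe a log format.
    """
    counts = Counter()
    for event in events:
        if event.get("type") == "page_view":
            counts[(event.get("locale"), event.get("country"))] += 1
    return counts

# Two German-language page views and one unrelated click event
sample = [
    {"type": "page_view", "locale": "de", "country": "DE"},
    {"type": "page_view", "locale": "de", "country": "DE"},
    {"type": "click", "locale": "en", "country": "US"},
]
print(page_views_by_locale(sample))  # Counter({('de', 'DE'): 2})
```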

At the program level, collecting the demographics of users allows Mozilla to further focus its programs on specific interest groups. The total number of page views by language and country sharpens this definition. This allows Mozilla to spend its resources designing programs that many different types of people are interested in, thereby influencing the participation-depth statistics that show progress towards the ten million Webmakers marker. Knowing which referrers are the most valuable helps to streamline resources and eliminate wasteful spending on marketing and/or partnerships at the program level. Following the clickstream helps Mozilla understand what interest groups are looking for when they access a program, thereby informing the UI (user interface) and UX (user experience) of individual components of the site.

At the course level, demographics help Mozilla determine which target groups are most interested in which courses. This differs from the program level in that age group and cultural expectations are considered here, rather than just the interest group. The total number of page views per language and country further supports this evaluation. This is helpful for tailoring coursework and instructional methods by further defining the target audience that is most interested in accessing the material. Mozilla then has the opportunity to try out new courses targeted at other groups. Referrer stats and the clickstream at this level lead to the development of content that inspires understanding, which furthers Mozilla’s ability to create content that pushes people towards becoming Webmakers.

Much like at the course level, these metrics are valuable at the activity level for understanding the groups accessing the materials. Collecting these four metrics at the activity level further helps with targeting audiences, finding applicable partners, eliminating wasteful spending, and reviewing content.

Participation depth metrics determine the reach of Mozilla programs and the extent to which learners are delving into the content. Other metrics that are important to collect include:

• IP

• Session Duration and Clicks per Session

• Think time

• Conversion rate

• Share of users who never publish work

Although IP logging is a raw metric, with thousands of users it is valuable for seeing how deep into Mozilla’s programming users are going. By cross-comparing IP logs between levels as well as between programs, Mozilla is able to quantify influence and participation depth across the board. Collecting session durations and clicks per session, and seeing an increase in these two metrics over time, further underlines this viewpoint. Think time can be used to filter out users who simply browse, as opposed to those who learn. Decreasing negative conversion rates (for example, the share of users who never publish work) is important for showing the strength of the programming.
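As a sketch of how these derived metrics could be computed from timestamped click logs (the event representation is an assumption made for illustration), the following shows session duration, clicks per session, and mean think time for a single session, plus the negative conversion rate across users:

```python
from datetime import datetime

def session_metrics(click_times):
    """Session duration, clicks per session and mean think time
    (average pause between clicks) for one session."""
    times = sorted(click_times)
    duration = (times[-1] - times[0]).total_seconds()
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    think_time = sum(gaps) / len(gaps) if gaps else 0.0
    return {"duration_s": duration, "clicks": len(times), "mean_think_time_s": think_time}

def negative_conversion_rate(all_users, publishing_users):
    """Share of users who never publish work."""
    return 1 - len(publishing_users) / len(all_users) if all_users else 0.0

session = [datetime(2013, 5, 1, 10, 0, 0),
           datetime(2013, 5, 1, 10, 1, 30),
           datetime(2013, 5, 1, 10, 4, 0)]
print(session_metrics(session))   # 240 s, 3 clicks, 120 s mean think time
print(negative_conversion_rate({"a", "b", "c", "d"}, {"a"}))  # 0.75
```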

Skill improvement can be determined by using this concept’s embedded assessment mechanism, badges, and submitted work. Two further metrics are added to the quantitative data collected:

• Number of Badges issued over time (organized by badge type)

• Number of links to participants’ work (gathering external links will allow Mozilla to see whether, and what, people are making)

The more badges that are issued and the more quality links that are submitted to Mozilla sites, the clearer the influence of Mozilla on skill improvement. These skill improvements are reviewed at the activity and the course level.
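A minimal sketch of how badge issuance could be tallied over time and by badge type follows; the record fields and badge names are hypothetical, since the badge infrastructure defines its own data model.

```python
from collections import defaultdict
from datetime import date

def badges_per_month(badge_events):
    """Count badges issued per (month, badge type)."""
    counts = defaultdict(int)
    for event in badge_events:
        month = event["issued_on"].strftime("%Y-%m")
        counts[(month, event["badge_type"])] += 1
    return dict(counts)

issued = [
    {"issued_on": date(2013, 4, 2),  "badge_type": "HTML Basics"},  # hypothetical badge names
    {"issued_on": date(2013, 4, 20), "badge_type": "HTML Basics"},
    {"issued_on": date(2013, 5, 1),  "badge_type": "Remixing"},
]
print(badges_per_month(issued))
# {('2013-04', 'HTML Basics'): 2, ('2013-05', 'Remixing'): 1}
```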

All of the aforementioned data can be collected by implementing logging across the board.
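One possible reading of “logging across the board” is a single event record shared by all four levels. The sketch below only illustrates what such a record could contain; all field names are assumptions, not part of the concept.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class LogEvent:
    """Hypothetical unified log record covering the metrics above."""
    timestamp: datetime
    ip: str                       # for participation-depth cross-comparison
    locale: str                   # e.g. 'de'
    country: str                  # e.g. 'DE'
    referrer: Optional[str]       # referrer stats
    level: str                    # 'institution', 'program', 'course' or 'activity'
    event_type: str               # 'page_view', 'click', 'badge_issued', 'work_published'
    detail: Optional[str] = None  # badge type, link to published work, etc.
```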

The last quantitative metric is also the most important, because it shows the reach of the overarching ethos of the open web community. This metric, called “Contribution”, is the number of people who actively contribute to the Mozilla open source community. In order to demonstrate contribution, counts are made of the following metrics:

• Number of Code Contributions

• Number of Curriculum Contributions

• Number of Individual Contributors

• Number of Events Run without Mozilla Influence (i.e. Events run using Mozilla materials, but not funded by or otherwise supported by Mozilla)

These four metrics are collected at each blended level. The data are pulled by program heads and submitted to the organization for review.
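As an illustration of how the contribution counts could be rolled up before being submitted for review, the following sketch assumes a simple record structure chosen for this example only:

```python
from collections import Counter

def contribution_summary(records):
    """Summarise contribution records of the form
    {'contributor': ..., 'kind': 'code' | 'curriculum' | 'event', 'level': ...}."""
    kinds = Counter(record["kind"] for record in records)
    contributors = {record["contributor"] for record in records}
    return {
        "code_contributions": kinds["code"],
        "curriculum_contributions": kinds["curriculum"],
        "independent_events": kinds["event"],
        "individual_contributors": len(contributors),
    }

records = [
    {"contributor": "alice", "kind": "code",       "level": "activity"},
    {"contributor": "bob",   "kind": "curriculum", "level": "course"},
    {"contributor": "alice", "kind": "event",      "level": "program"},
]
print(contribution_summary(records))
# {'code_contributions': 1, 'curriculum_contributions': 1,
#  'independent_events': 1, 'individual_contributors': 2}
```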

2.4.3 Qualitative Metrics and Approaches

At the institutional level, qualitative data show community support for marketing and partnership strategies. At the program level, these data inform interest-group and software strategies. Each program collects qualitative data during courses and activities, which shows the interest in and use of individual learning paths and activities. Because this concept proposes a train-the-teacher system in which courses are run by Mozilla for educators, the qualitative data are collected through observation, focus groups and interviews. Program directors and project members collect these data during courses as well as at the end of courses. Mozilla observes educators while training them to run each course, and focus groups are run to garner feedback on individual methods or materials.

Learners are asked to complete a qualitative and quantitative survey at the end of a course to give Mozilla feedback. That survey is included in Appendix II.

In addition, learners use peer assessment to assess one another. These observations are also collected by Mozilla to further understand the success of the program, course or activity.

Peers will use the following guidelines¹ to assess one another:

Evidence of Data Gathering

• How well did your peer show that he/she could gather assets (images, text, video and other data from the web) to voice his/her own opinions in a web native story?

• How well did your peer attribute the resources he/she used? Would you be able to find those resources again?

Evidence of Understanding

• Would you say that your peer really understood this week’s theme and why it is relevant to web native filmmaking?

• How well did your peer explain the material in his/her own words?

Evidence of Reflection and Analysis

• How well does your peer’s work incorporate feedback from others? Did his/her work change after sharing with and speaking with you or your other peers?

• Would you say your peer expressed a clear opinion on his or her topic (i.e. the theme of his or her project)?

Evidence of Creativity

• Would you say that your peer really had a solid grasp of his/her topic?

• Would you say that your peer represented another take or perspective on the topic that you had not really thought of before?

Once the target group has completed the course, they are asked to run the course for their own target audiences. Once the target group has run the course, they too are asked to complete a survey (Appendix III) to help Mozilla and its partners improve on the offered content.

Qualitative data collected on each level gives anecdotal and practical evidence for the Webmaker initiative’s success.

¹ Guidelines created via email collaboration with Ingrid Dahl of the Bay Area Video Coalition.

