Learning with MOOCs for professional development

 


Since the beginning of this year I’ve been working on the EU-funded bizMOOC Project, a consortium of HEIs and Industry partners from eleven European countries “to enable a European-wide exploitation of the potential of MOOCs for the world of business”.

So far I’ve been mostly involved in delivering the outcomes of WP3: three pilot MOOCs focusing on life-long learning and key business competences, namely learning to learn, sense of initiative (entre-/intrapreneurship), and innovation and creativity. In truth, I am not working on three MOOCs but developing one in collaboration with DIDA (Italy) and The National Unions of Students in Europe (Belgium), and overseeing the production of the other two, led by colleagues in Germany and Spain.

What kick-started things for me was our project meeting in Cardiff last February, where partners participated in a workshop aimed at guiding us through the discussion of the whos, whats and hows that would shape the design of each MOOC – what type of MOOC? Who is it for? What are the learning objectives? How do we get there? And so forth. If you are interested in learning design, Beck Pitt has put together the slides and audio recorded during the event to create a ‘How to produce a MOOC?’ video that explains these questions better than I can here.

In terms of content, it was decided early on that our L2L MOOC would reuse as much material already available as possible. OpenLearn holds very good courses on the topic under a CC licence that allows adaptation. In Cardiff we also sat down to quickly review these and decide which sections we could borrow, and what gaps needed to be filled in. After that, my job was to go away and assemble a first draft on OpenLearn Create (OLC), the OU’s free educational platform for anyone to publish open content. OLC is Moodle-based and fairly easy to use, although I must confess it can be frustrating too: the editing menu is limited, which means that you’d better know some HTML or pull your hair out in desperation.

As I write, ‘Learning to learn with MOOCs for professional development’ is sitting with all project partners for final review. It is a free course that lasts four weeks with approximately three hours of study time per week. In fairness though, you could run very quickly through some of the sections and concentrate on what interests you most.

Week 1 starts with some snippets of stories from learners who have had different experiences of learning online, which serve as an introduction to the importance of reflection in learning. Week 2 is all about MOOCs: what is a MOOC, what are the advantages of taking a MOOC, where do you find MOOCs, how do you assess their quality, etc. In talking about the skills that you need to be successful in a MOOC, you really are talking about the skills the workforce needs to operate effectively in the digital world, which is why Weeks 3 and 4 look into this: your online identity, your digital footprint, dealing with fake news and information overload, media to collaborate online, and more.

Overall ‘Learning to learn with MOOCs…’ is quite a reflective course. We didn’t make it too interactive on purpose: one, because we knew we wouldn’t have the capacity to moderate the forums forever (the course will not be taken down after the initial four weeks); and two, because I think you can’t force anyone to take part if you are not there yourself to make it happen or to encourage participants. Having said that, the setup at the moment is for learners to receive a free statement of participation upon completion, if they read all sections (actually, a click on each screen will do) and post a message in the crowdsourcing forum of Week 2. I may change this before we go to launch, but it will be interesting to see whether people take the bait: if you want your piece of paper, you’ll have to contribute and work for it, ha!

The course is already available here. For now it is protected by a password and not open for enrolment, but bookmark the link: we’ll be ready to go on October 16th.

Featured image: Say MOOC by Audrey Watters, CC BY-SA 2.0


From Ireland to the UK (and back): my open story

Featured image: Walton Hall by B. de los Arcos, CC BY-NC

Yesterday I took part in an online hangout organised by the team behind #101openstories, hosted by the wonderful Jenni Hayman. The initiative started a few months back when six fabulous women, all members of GO-GN, got together to invite the rest of us to tell our open stories. I’m sticking the recording at the end of this post, but to save Jenni the awful task of having to transcribe what was said, here is how it all started for me. In fairness, I’m also keeping a promise I made on Twitter, although I should have used my own account, not GO-GN’s. I got mixed up!

If I’m gonna tell this story properly, I have to go back to the year 2000 when I started teaching with the Open University. My background is in languages and at the time I was teaching Spanish at University College Galway (which then became NUI Galway). The OU, however, was distance education and a whole new adventure that got really exciting when from 2003, I think, we went fully online –I no longer met my students face-to-face but spoke to them once a month using a synchronous audiographic conferencing system developed in-house called Lyceum. It was fantastic. I loved every minute of it. (Remember Mattie the cat, Fernando?). By then I had moved from Galway to Dublin and was studying part-time for an MSc in IT in Education at Trinity College. It was my first excursion into doing research. See, teaching online posed new challenges: how different would I be as an online tutor? How would not having eye contact with my students affect the way I taught? The minor dissertation I wrote was then published as a chapter in a book available to purchase for £130.75 on Amazon. (I’m not even giving you the link). I didn’t know about open then.

Dublin didn’t quite work for a bunch of reasons, but upon returning to Galway and another semester of ‘traditional’ teaching, I wasn’t happy. I felt stuck. After getting a taste for doing research, I wanted more. I applied for a grant to do a PhD at the OU and got it. Hooray! That was probably the most significant moment of my career. It meant leaving Ireland though, for Milton Keynes, the town of roundabouts (I don’t know why they mention cows, I haven’t seen any). The horror! Don’t worry, I said to L, we’ll be back as soon as I graduate. Famous last words. Cue financial meltdown, our Celtic Tiger caught having a nap. It would take us the best part of eleven years to return to Irish shores (I completed my PhD in 2009) and even now, I can’t say I’m properly back.

But let’s get on with the story. While PhD-ing, I was allowed to continue tutoring OU students; after all, they were the subjects of my research. One day, somebody asked: how would you like to share your teaching materials? The Department of Languages had just launched LORO, a repository of resources for language teachers, an online space that could be accessed by everyone. Here is the thing: if you were teaching beginners’ Spanish, for example, the OU would send you a CD containing materials that you could use in your online class, either as is or customised as you pleased. I had done a lot of that: had a look at what we were given, maybe used a resource without any changes once or twice, then adapted it according to what I thought had worked or not for my students; I had reused OU materials to build my own versions of screens and exercises. But I had no idea what the materials for the intermediate or advanced Spanish classes were, and even less of a clue what my colleagues teaching Chinese, Welsh, German or French were doing with their beginners. With LORO, that problem was instantly solved.

Soon I moved from being a user of the repository to helping others use the repository. It was also when I first learnt of Creative Commons licences. PhD under my belt, my next job was with the HEFCE-funded Support Centre for Open Resources in Education (SCORE), where my role was supporting anyone and everyone who was interested in using the OU’s Labspace (today OpenLearn Create) to set up an open course, for instance. I’m afraid at the time I didn’t think of it as the momentous event it probably was, but I was there when David Wiley himself shared with SCORE fellows his now lauded thoughts around the letter R.

Then came OT12, a MOOC on Open Translation tools and practices, and the SCORE Microsites, two collections of OER: one on research skills for international students thinking of doing their PhD in the UK; the other, for digital scholars.

At the end of 2012 I was appointed as one of the researchers with the Hewlett-funded OER Research Hub project, now Open Education Research Hub, and that was it, I haven’t looked back since. This post would get too long if I went into all that happened to us OERHubbers, so maybe another time. Suffice it to say that there was ExplOERer and OEPS, and there is GO-GN (pride of place), bizMOOC and UK Open Textbooks. I’m not officially teaching any more; it wasn’t my own choice, but OU student numbers in the Republic of Ireland went downhill at such a speedy pace that I was invited to volunteer for redundancy. (Blame, among other things, the rise in fees and the demise of the ‘single module’ learner). But I’m still an educator, in a funny way, and I’d like to think that I’m still helping others think about open.

Here is the #101openstories recording from July 24th, 2017, with Anne Algers, Martin Weller and myself.

OER in schools

Despite great initiatives like the DigiLit Leicester project on these shores and the K-12 OER Collaborative in the US, OER folks’ attention does not focus often enough on the K-12/school scene. TJ Bliss writes: “If we want OER to become the default, we need people to use OER and to know that they are using OER.” In my experience, lack of OER awareness goes mostly unchallenged among schoolteachers, who more than anyone should be supported in championing open education.

If you do a quick search for studies of OER use in schools, the return is only a handful of publications and reports. Our open-access paper, hopefully out in the next few months, will help fill this gap. In the meantime, the infographic below presents a frequency analysis of data collected from surveys conducted by the OERRH until December 2014, in total a sample of 657 K-12/school educators across the globe. Open Education Week is nearly over and we are seeing it out with a bang!

Download pdf here.

Data on the use of OER by school educators

OER at #iNACOL14

 

The 9th (and to date largest) iNACOL Blended and Online Symposium has now concluded. Joining the nearly 3,000 attendees at the massive Palm Springs Convention Center, I made my way from Pueblo to Sierra via San Jacinto and Catalina, ice cream in one hand and lemonade in the other, navigating the talks in the OER participant track –and believe me, with over 200 concurrent sessions packed in two and a half days, I very much welcomed a path to follow.

It wasn’t by coincidence that in her welcome address Susan Patrick, iNACOL’s CEO, identified open education and OER as one of the top ten trends driving the future of education: iNACOL are key contributors to the development of OER through policy – see for instance OER State Policy in K-12 Education: Benefits, Strategies, and Recommendations for Open Access, Open Sharing and OER and Collaborative Content Development.

At this year’s conference Karl Nelson, Director of the Digital Learning Department for the Washington State Office of Superintendent of Public Instruction (OSPI), referred to current legislation to foreground his talk on evaluating OER: the state of Washington’s “recent adoption of common core K-12 standards provides an opportunity to develop a library of high-quality, openly licensed K-12 courseware that is aligned with these standards”. The familiar ‘it may be free, but is it any good?’ question initiated a review process of OER in Math and English Language Arts (ELA) to help educators select high-quality resources, provide information for materials adoption and identify gaps in alignment with Common Core State Standards (CCSS). Not small OER, mind you, but full courses that districts could adopt rather than spend money on a textbook.

The evaluation rubric combines five existing review instruments. For breadth, Common-Core alignment; publishers’ criteria, an overview of curricular materials (i.e. entire courses) that integrates content and practice; and reviewers’ comments, –‘Would you use this material in your classroom?’, ‘What is the ideal scenario for this resource?’, etc. For depth, the EQuiP rubric, which is unit-focused and measures overall quality when compared to CCSS; and a subset of the Achieve OER rubric, designed to evaluate the quality of digital materials.

The outcome of the review is not only an important library of K-12 open resources, but also a methodology for districts to replicate as they adopt OER. Kudos to both efforts but my slight gripe with spreading this ‘how-to’ is that, at least on first impressions, it’s a fairly complicated task even for a dedicated and trained team of educators/reviewers.

“Teachers think they don’t have the stuff to make Common Core work”, said Karl; this gap is about to be filled by the K-12 OER Collaborative, a state-led project supported by Creative Commons, Lumen Learning and others. Nelson was nearly as tight-lipped as his co-presenters, Jennifer Wolfe from The Learning Accelerator and Layla Bonnot from the Council of Chief State School Officers (CCSSO), or at least just enough to build up the excitement about an official RFP likely to be announced during OpenEd next week: the call to create openly licensed, high-quality, Common Core-aligned comprehensive modules for K-12 Math and ELA will be open to all content developers. Interested? Watch this space.

The slides for my own presentation ‘Teaching and Learning with OER: What’s the Impact in a K12 (Online) classroom?’ are available here.

From Chile, with love

Universidad de Chile

Photo: CC BY-NC celTatis https://flic.kr/p/piiCLb

I’m intrigued. The façade of the Universidad de Chile on Avenida Libertador Bernardo O’Higgins in Santiago displays a huge banner with a quote from Nicanor Parra, the anti-poet: “Don’t stop being a flea in the minotaur’s ear”. The university celebrates Parra’s 100th birthday recalling part of the speech he gave at the opening of the academic year in 1999. I haven’t read Parra, I’m not familiar with his subject matter but if I had a guess at what the quote means I’d think of the role of a critical university, standing up to the relevant powers with its own voice and annoying the hell out of the establishment. And maybe I wouldn’t be too far off. In a letter to Luis Riveros, Parra’s signature reads ‘Académico de la muela del juicio’ –Academic of the Wisdom Tooth. So irreverence counts, be it flea-style or of a more dental nature.

Staying on topic, it’s exasperation that sums up my participation in the VI Congreso Iberoamericano de Pedagogía, hosted and (dis)organised by Universidad Católica Silva Henríquez and Pontificia Universidad Católica de Valparaíso. Where do I start? No abstracts. No time keeping. No information. No place to display conference posters. No conference proceedings. No wifi. No substance. And who the heck schedules a tour of Valparaíso, UNESCO World Heritage site, to coincide with presentations? Had I not hitched a lift back to Santiago with two Chilean academics who agreed that Silva Henríquez could have done a much better job, I would have been happy to suggest that sending a spy over to Europe to learn how to organize a conference was a damn good plan. Oh nasty. Me retracto de todo lo dicho. I take back everything I said. Not.

On a different note, it’s been interesting to learn that getting an education in Chile is extremely expensive, regardless of universities being state-owned or private. Werner Westermann, who leads one of ROER4D’s sub-projects, tells me that attrition rates in his institution reach nearly 50% in first year, partly blamed on students seriously lacking basic learning skills. His study of the effectiveness of OER use to improve freshmen’s mathematical and logical thinking is the only initiative I know of that relates to open education in the region. Surprising. I can’t think of a scenario where OER and open education make better sense.

Cleaning our way to a monster dataset

In February of 2013 the newly put together OERRH team completed the humongous task of creating a bank of survey questions which would be one of the main research instruments to collect data around the project’s eleven hypotheses. Bear one thing in mind: at the time, each of us was working with a different collaboration – OpenStax, Saylor Academy, Flipped Learning Network, OpenLearn, TESS-India, CCCOER, etc.; initially, each collaboration was allocated a different hypothesis, which also meant a different pick of questions from the survey bank and a different version of the survey. I’ll give you a couple of examples: our collaboration with the Flipped Learning Network originally focused on teachers’ reflective practices, so flipped educators never answered questions on the financial savings of using OER; students using OpenLearn were not asked about the impact of OER on teaching practices; informal learners did not have questions that related to formal studies, and so on. In addition, collaborations had a stake in the research and input in the design of their survey: questions were discussed further, tweaked, piloted and tweaked again ahead of launching. All in all, we put together 18 different questionnaires. The idea was always there to merge all data into one massive file (what I called the MONSTER) that would allow us to undertake comparative analysis. What follows is the official record of how I laboriously coded, recoded, corrected, deleted and cursed (a bit) my way through the OERRHub surveys in order to have a squeaky clean dataset.

SurveyMonkey and SPSS don’t talk to each other that well

Every researcher knows that there are errors and inaccuracies that need to be ironed out before you commit yourself to analysing quantitative data. We are all human, right? On this occasion, for the first complication that came my way, I’m gonna blame the software: when exporting data from SurveyMonkey as an SPSS file, your variable labels and values will get confused. Let me explain: say you want to find out about OER repositories, so you create a list in SurveyMonkey and ask respondents to tick options from it to answer the question ‘Which OER repositories or educational sites have you used?’. If you expect the list to appear as variable labels in SPSS, it won’t. Instead, the software will repeat your question in the Label box and use the name of the repository in the Values box with a value of 1.


As it happens, the wonderful OER researcher Leigh-Anne Perryman had a solution in her bottomless bag of tricks: the question design in SurveyMonkey had to be amended for future respondents to have the option to tick either ‘yes’ or ‘no’ for each of the repositories on the list. To sort out the damage with any data already collected, what needed to be done was manually input the name of the repository in the label box, and give the variable a value of 1=yes and 2=no. Tedious but easy to fix.


Editing the survey questions to include a yes/no answer also served to remedy another software mishap: the fact that SurveyMonkey does not differentiate a blank answer from a ‘no’ answer when downloading results as an SPSS file. On this occasion, the required fix wasn’t quick. I closely inspected the data case by case: if the respondent did not choose any of the options in a particular question, I considered each a ‘missing’ value; if the respondent ticked at least one option, the blank answers were recoded into a ‘no’ value.
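
For anyone facing the same mishap, that blank-versus-‘no’ rule can also be scripted rather than applied by hand. Here is a minimal sketch in Python with pandas; the column names and data are hypothetical stand-ins for the tick-box columns of one multiple-choice question:

```python
import pandas as pd
import numpy as np

# Hypothetical slice of a SurveyMonkey export: one column per repository
# option, 'yes' where the box was ticked and blank (NaN) otherwise.
repo_cols = ["OpenLearn", "MERLOT", "Saylor"]  # illustrative names
df = pd.DataFrame({
    "OpenLearn": ["yes", np.nan, np.nan],
    "MERLOT":    [np.nan, np.nan, "yes"],
    "Saylor":    [np.nan, np.nan, np.nan],
})

# The rule described above, row by row:
# - no options ticked  -> leave every option as missing
# - at least one tick  -> recode the remaining blanks as an explicit 'no'
def recode_blanks(row):
    if row[repo_cols].notna().any():
        return row[repo_cols].fillna("no")
    return row[repo_cols]  # all blank: keep as missing

df[repo_cols] = df.apply(recode_blanks, axis=1)
```

Row-wise `apply` is slow on huge files, but for survey-sized data (a few thousand cases) it is perfectly adequate and keeps the rule readable.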

Another curious instance of having to recode data was spotted by Beck as the two of us marvelled over having responses from a total of 180 different countries in the world: I can’t recall whether this was a default list in SurveyMonkey but for some reason Great Britain and the United Kingdom were given as separate choices. Obviously, these had to be combined into one.

Correcting human errors

I put my hand up: the OERRH surveys aren’t exactly short and sweet. As a result, and this is my own take on the matter, the data suffered. In some cases, respondents provided the demographic information but did not answer anything else; they were deleted from the final dataset. The same fate met those who selected all options in one question despite these being mutually exclusive – I find it hard to believe that someone is studying in school and getting a degree while doing a postgrad at the same time, don’t you?

I’ve decided that for some respondents it must have been easier to provide an answer in the comments box than to read through all the available options; what other explanation can you find for a teacher who answers the question ‘What subject do you teach?’ by writing ‘Engineering’ in the ‘Other’ field instead of ticking it from the 17 items at his disposal? Duly noted and corrected.

In other cases, for instance, respondents would leave unticked ‘MOOCs’ when asked about what type of OER they use, but then add as an open comment that they studied with Coursera or EdX. These had to be corrected as well.

Although written in English, the OERRHub surveys were distributed world-wide: it is difficult to anticipate where people might find the language a barrier, but here is an example: we used the word ‘unwaged’ to inquire about employment status; several respondents left the option unmarked, but indicated “Unemployed” or “No job” in the comments field. Again, these cases were corrected accordingly.
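
Corrections like these can be at least semi-automated: flag the cases where a tick box was left blank but the open comment mentions a matching answer, then recode them. A small sketch, again with hypothetical column names and an illustrative platform list:

```python
import pandas as pd

# Hypothetical slice of the export: the 'MOOCs' tick box plus the free-text comment.
df = pd.DataFrame({
    "MOOCs":   [None, "yes", None],
    "comment": ["I studied with Coursera", "", "nothing relevant"],
})

# Respondents who left 'MOOCs' unticked but named a MOOC platform in their
# comment get the tick box recoded to 'yes'.
platforms = ["coursera", "edx", "futurelearn"]  # illustrative list
mentions_mooc = df["comment"].str.lower().str.contains("|".join(platforms), na=False)
df.loc[df["MOOCs"].isna() & mentions_mooc, "MOOCs"] = "yes"
```

A script like this only *flags and fixes* the obvious cases; anything ambiguous still deserves the eyeballing described above.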

Merging data

Cleaning data is always painstaking work, especially when you are handling thousands of cases, but let’s face it, it is also mostly uncomplicated. What could have been, if not avoided, at least attenuated, was the trouble I found myself in when merging the data from the eighteen OERRHub surveys. As days went by, the monster dataset grew fatter and fatter, but my love for my colleagues (and myself) grew thinner and thinner. Why? It is true that each of the individual surveys had to be customised per collaboration, but we researchers were a tad undisciplined: there were unnecessary changes to the order in which options were presented, there were items added and items subtracted, and wording altered without consultation. All this made data merging more time-consuming, cumbersome and fiddly than it should have been.
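
The mechanics of the merge itself boil down to harmonising column names (and variant values, like the Great Britain/United Kingdom duplicate mentioned earlier) before concatenating. A sketch with two tiny hypothetical survey exports:

```python
import pandas as pd

# Two hypothetical survey exports: same question, different column names.
survey_a = pd.DataFrame({"employment_status": ["Unwaged"],
                         "country": ["United Kingdom"]})
survey_b = pd.DataFrame({"country": ["Great Britain"],
                         "employment": ["Full-time"]})

# Harmonise column names before concatenating...
survey_b = survey_b.rename(columns={"employment": "employment_status"})

# ...then stack the surveys and fold variant values into one.
monster = pd.concat([survey_a, survey_b], ignore_index=True, sort=False)
monster["country"] = monster["country"].replace({"Great Britain": "United Kingdom"})
```

With eighteen surveys the real work is building the rename/replace mappings, which is exactly where a single ‘data master’ enforcing one codebook would have saved time.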

All’s well that ends well, though. We have a clean dataset that comprises 6,390 responses and is already producing very interesting results. Here is one of the lessons learnt: if you are dealing with multiple researchers and multiple datasets, nominate a data master: one to rule them all and bind them, although not in the darkness. Darkness is bad, open is good.