Wednesday, 26 April 2017

#Mobile #assessment based on self-determination theory of motivation #educon17

Talk given at Educon in Athens, Greece by Stavros Nikou; a really interesting mobile learning addition in the area of vocational learning and assessment. Mobile devices in assessment offer and support new learning pedagogies and new ways of assessing: collaborative and personalised assessments.

The framework aims to address motivation following self-determination theory (http://selfdeterminationtheory.org/theory/), which distinguishes intrinsic and extrinsic motivation. Intrinsic motivation works from inside the person, because the activity itself is enjoyable; extrinsic motivation is built upon reward or punishment. What they try to do is ignite more intrinsic motivation, as it leads to better understanding and better performance.

There are three elements in the theory: autonomy, competence and relatedness, all of which impact self-determination. This study tries to use these three elements to increase intrinsic motivation.
Mobile-based assessment motivational framework: the framework is still in a preliminary phase, but of interest. Autonomy: personalised and adaptive guidance, grouping questions into different difficulty levels (adaptive to the learner), location-specific and context-aware.
Competence: provide immediate emotional and cognitive feedback, drive students to engage in authentic learning activities, and give appropriate guidance to support learners.
Preliminary evaluation of the proposed framework: paper-based and mobile-based assessments were used before and after the intervention to test the framework. Using an experimental design, assessments were taken after each week of formal training, two assessments in total for both groups. ANCOVA was used for the data analysis.
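To make the analysis step concrete, here is a minimal sketch of this kind of ANCOVA in Python, assuming a hypothetical file with one row per student: a 'group' column (paper vs mobile), a 'pre' covariate (pre-intervention score) and a 'post' outcome score. The file and column names are my own illustration, not from the study:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per student with group, pre- and post-test scores.
df = pd.read_csv("assessment_scores.csv")

# ANCOVA: test for a group effect on post-test scores while
# controlling for the pre-test covariate.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the C(group) row is the adjusted group effect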

Results: significant differences in autonomy, competence and relatedness. The framework will be expanded with additional mobile learning features and will be used with different student groups; future research aims to enhance the framework further.
The mobile assessment had a social media collaborative element in it, and it also made use of more feedback options thanks to the technical possibilities of the mLearning option.


Using #learningAnalytics to inform research and practice #educon17

Talk during Educon2017 by Dragan Gasevic, known for the award-winning work of his team on the LOCO-Analytics software, considered one of the pioneering contributions in the growing area of learning analytics. In 2014 he founded ProSolo Technologies Inc (https://www.youtube.com/watch?v=4ACNKw7A_04), which develops a software solution for tracking, evaluating, and recognizing competences gained through self-directed learning and social interactions.

He jumped onto the stage with a bouncy step and was in good form to get his talk going.

What he understands by learning analytics is the following: shaping the context of learning analytics results in both challenges and opportunities. Developing a lifelong learning journey automatically calls for a measuring system that can support and guide the learning experience for individuals.
Active learning also means constant funding, to enable the constant iteration of knowledge, research and tech. But even if you provide new information, there are only limited means to understand who in the room is actually learning something, or not. So addressing the need to get meaningful feedback on what is learned is the basis of learning analytics.
The data comes not only from the learning system (e.g. an LMS); socio-economic details of individuals are also used.
No matter which technologies are used, the interaction with these technologies results in digital footprints. Initially, technologists used the digital footprints as a means to adjust the technology, but gradually natural language processing, learning, meaning creation… also became investigated using these technologies.

Actual applications of learning analytics were given, with two well-known examples.
Course Signals from Purdue University: analysing student actions within their LMS (Blackboard) and different student variables, algorithms use the data from each student's digital footprint to produce outcome variables for student risk (high, medium, low). Teachers and students got 'traffic light' alerts. Those students who used the signals had an increase of 10 to 20 percent in student success.
A content analysis of the use of Course Signals showed that summative feedback was much less related to student success, but formative (detailed, specific) feedback did have an immediate effect on learning success.
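To make the traffic-light idea concrete, here is a small, hypothetical sketch of a risk classifier over digital-footprint features. This is not Purdue's actual algorithm, just an illustration of mapping a predicted success probability onto red/amber/green alerts; the features, numbers and thresholds are invented:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical digital-footprint features: [logins/week, submissions, grade %]
X_train = np.array([[1, 0, 40], [3, 2, 55], [5, 4, 70], [9, 6, 85]])
y_train = np.array([0, 0, 1, 1])  # 1 = passed the course

model = LogisticRegression().fit(X_train, y_train)

def traffic_light(features):
    p_success = model.predict_proba([features])[0, 1]
    if p_success < 0.4:
        return "red"     # high risk: alert teacher and student
    elif p_success < 0.7:
        return "amber"   # medium risk
    return "green"       # low risk

print(traffic_light([2, 1, 50]))  # a struggling student triggers an alert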
University of Michigan's E2Coach (one of the top two public universities in the US): they have large science classrooms, populated by students with very varied science grade backgrounds.
In the E2Coach project they used the idea of 'better than expected', so they looked at successful learning patterns: successful students would be adaptive (trying different options to learn) and would self-organise in peer groups to enable content structuring.
Top-performing students were asked to give pointers on what they did to be successful learners. Those pointers were given to new students as feedback on how they could increase learning success, while at the same time giving them the option to learn (self-determination theory). This resulted in about a 5 percent improvement in learner success.

Challenges of learning analytics
Four challenges:
Generalisability: we are seeing predictive models for student success, but they only extend to what can be generalised. These models were not significant across different contexts, so generalisability was quite low: some indicators are significant predictors in one context, yet in other contexts they are not. So what is the reason behind this? This is why massive amounts of MOOC data are now being collected to look for specific reasons. But this work is difficult, as we need to understand which questions we need to address.
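The generalisability problem can be illustrated with a toy simulation: a success predictor trained on one synthetic course transfers poorly to another course where a different indicator drives success. Everything below is made-up data, purely to show the effect:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_course(n, forum_weight):
    # Hypothetical course where forum activity matters more or less for success.
    X = rng.normal(size=(n, 2))  # [forum posts, video views] (standardised)
    logits = forum_weight * X[:, 0] + (1 - forum_weight) * X[:, 1]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_a, y_a = make_course(500, forum_weight=0.9)   # course A: forum-driven success
X_b, y_b = make_course(500, forum_weight=0.1)   # course B: video-driven success

model = LogisticRegression().fit(X_a, y_a)
print("within course A:", accuracy_score(y_a, model.predict(X_a)))
print("transferred to B:", accuracy_score(y_b, model.predict(X_b)))  # drops sharply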
Student agency: also a challenge. How many of the decisions are made by the students themselves? The responsibility for learning is in their hands.
A common myth in learning analytics: the more time students spend on tasks, the more they will learn. Actually, this is not the case; rather the reverse. Even time spent with educators frequently shows up as an indicator of poor learner success.
Feedback presentation: we felt that the only way to give feedback was visualisation and dashboards, and many vendors involved in learning analytics focus on dashboards. But sometimes these dashboards are harmful: students compare themselves with the class performance, resulting in less student engagement and learning. Students sometimes invested less time because they felt from the dashboards that they were doing well, and with less investment came less learning.
Investment and willingness to understand: see http://he-analytics.com and the SHEILA project (http://sheilaproject.eu/), where 50-plus senior leaders were interviewed about their understanding of learning analytics. Institutions hardly provide opportunities to learn what learning analytics is really about. There is a lack of leadership on learning analytics, so in many cases leaders are not sure what it entails, or what to do with it. That results in buying a product… which does not make sense.
Lack of active engagement of all stakeholders: students are mostly not involved from day one in the development of these learning analytics (no user-centred approach).

Direction for learning analytics
Learning analytics is about learning. So we need to fall back on what we already know about learning, and then design certain types of interventions using learning analytics. Learning analytics is more than data science: data science provides powerful algorithms, machine learning algorithms, system dynamics… but we end up in a data-crunching problem, as we need theory (particular approaches: cognitive load, self-regulation), and practice also informs where to go. We need to take into account whether the results make sense: which of the correlations are really meaningful, which make sense… At the same time we need to take into account learning design and the way we are constructing the learning paths for our students. We cannot ignore experimental design if we want meaningful learning analytics; we need to be very specific about study design. Interaction design covers the types of interfaces, but these need to be aligned with pedagogical methods.

How does this address the challenges mentioned before?
Generalisability: if we want to address this, we need to take into account that one size fits all will never work in learning. Different missions, different populations, different models, different legislation… down to the level of individual courses. Differences in instructional design mean different courses need different approaches; it is all about contextual information. So what shapes our engagement? Social networks work mainly through so-called weak ties; networks with only strong ties restrain full learning success. Data mining can help us to analyse networks: exponential random graphs (not sure here?). Machine learning transfer: using models across different domains, with recent good developments addressing this.
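As a toy illustration of the weak-tie point, the sketch below builds a small learner network in which the only connection between two tightly knit study groups is a single bridging edge; the data is invented, purely to show the idea:

import networkx as nx

G = nx.Graph()
# Two tightly knit study groups (strong ties)...
G.add_edges_from([("ann", "bob"), ("bob", "cat"), ("ann", "cat")])
G.add_edges_from([("dan", "eve"), ("eve", "fay"), ("dan", "fay")])
# ...connected by a single weak tie.
G.add_edge("cat", "dan")

# Bridges are edges whose removal disconnects part of the network;
# here the weak tie is the only bridge between the two groups.
print(list(nx.bridges(G)))  # [('cat', 'dan')]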

Student agency: back to established knowledge (a 2006 paper: students use operations to create artefacts for recall, or to provide arguments or critical thinking). Student decisions are based on student conditions: prior knowledge, study skills, motivation… and all of these conditions need to be taken into account. Algorithms can identify sub-groups of learners: some students are really active, but not productive; some students were only moderately active, yet performed very well in terms of studying. Study skills change and priorities change during learning, so this means different learner agency. Desirable difficulties need to be addressed and investigated. There was no significant difference in success between the highly active and the moderately active students, which needs to be studied to find the reasons behind it. Learner motivation changes the most during the day, as can be seen in the literature. So we need to focus on understanding these reasons, and set up interdisciplinary teams to highlight possible reasons while strongly grounding the work in existing theory.
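As a small illustration of how such sub-groups might be surfaced, here is a minimal clustering sketch on made-up activity and performance data; the feature names and numbers are hypothetical, not from the talk:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features per learner: [activity events per week, grade %]
X = np.array([
    [40, 55], [45, 50], [38, 60],   # very active, mediocre results
    [12, 85], [15, 90], [10, 82],   # moderately active, strong results
    [5, 40], [3, 35],               # barely active, weak results
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for cluster in range(3):
    print(f"cluster {cluster}: mean [activity, grade] =", X[labels == cluster].mean(axis=0))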

Analytics-based feedback: students need guidance, not only task-specific language indicators. This can be done with semi-automatic teacher triggers to provide more support and guidance, resulting in meaningful feedback used by students (look up research from Sydney, ask reference Inge). Personalised feedback has a significant effect (Inge, again seen in MobiMOOC). http://ontasklearning.org
Shall we drop the study of visualisation? No, it is significant for study skills and for decision-making on analytics, but we need to focus on which methods work and gradually involve visualisations to learn what works and what does not. Tailor it to specific tasks, and design it in a different way than up till now.

Development of analytics capacity and culture: there are ethics and privacy concerns, and very few faculty really ask students for feedback. What are the key points for developing a culture? Think about discourse (not only technical specs). We need to understand data, work with IT, and use different types of models, not only data crunching, starting from what we already know. Finally, transformation: we need to step away from learning analytics as technology, and talk to our stakeholders to know how to act, how to do it, and who is responsible for what; the process can be an inclusive adoption process (look it up). We need to think about questions and design strategies, and when working on the whole phenomenon we need to involve the students. Students are highly aware of the usefulness of their data.

If we want to be successful with learning analytics and make a significant difference or impact, we need to work together.

Tuesday, 25 April 2017

Liveblog: from university to continued professional learning #educon17 #lifelonglearning

Frank Gielen talks on innovation adoption and transformation. This talk is part of the pre-conference talks of the Educon conference in Athens, Greece. The talk looks at how to organise learning (master, professional, PhD…) to gradually move towards lifelong learning.

There is a skills gap between what companies want and which human resources and innovations are available.
The human capital is frequently missing, which means that education is increasingly important.
This holds, for example, if you want to transform an 'old' energy approach into sustainable or renewable energy approach(es).
So education is core in the innovation process, as you need to train all stakeholders (senior management, workforce on the floor, mid-management…).
Education linked to innovation has two main factors impacting it: speed of adoption (graduates need to be skilled) and timeliness (skills need to be put to use within two months).
So innovation speed is equivalent to training need: learning needs to be adapted to the speed of innovation.

Personalising learning
Starting from the knowledge triangle: education, industry and research as a baseline for higher education goals, which need to be combined in order to create an employable, highly trained workforce coming out of higher ed.
Which learning trends are important to stay competitive in the market? Continued professional development, and becoming power learners. This means that the human factor needs to keep developing, in order to stay on top of a high-turnaround field.
There is no one size fits all, so in education this means personalised learning. The role of the teacher changes: instead of giving a lot of lectures, providing online resources which students are knowledgeable enough to use to create a constant baseline, and adding mentoring, e.g. the Socratic approach, where the teachers are in close contact with and support the learners.
Solving a challenge also includes having an effect on society.

Merging masters with professional learning
Contemporary learning consists on average of 70 percent informal learning, 20 percent social learning and 10 percent formal training.
MicroMasters (a short online format, 10-15 ECTS and commonly project-based) are in many cases complementary to campus teaching, but enable a blended master. This is something we need to consider as InnoEnergy. In many cases microMasters are linked to deepening learning in a specific field and are frequently based on a general foundation (so there is a need for clear learning paths).
We are shifting towards lifelong learning, blurring the boundaries between master schools, doctoral schools and professional schools.
Learning architecture: MOOCs or microMaster, certified microMaster, blended microMaster with coaching, blended-in-house microMaster with coaching and Bring Your Own Program (BYOP).
A new learning paradigm: personalised, just-in-time learning.
Education is going through a digital transformation. This means that more data is available, which we can start using as a means to support learning. Based on this, personalised learning will become available, and lacking skill sets can be identified. Data-driven education comes a bit closer to enabling personalised learning.
Feedback and coaching have the highest learning impact. This means that teachers need to be prepared to become a guide-on-the-side or a good coach.

Learning entrepreneurship
Learners need to learn it. Not all students need to become entrepreneurs, but all students need to understand the entrepreneurial skill set: seeing opportunities, motivating people, driving change, finding scarce resources, and dealing with the uncertainty of innovation. But then: how do we measure this, and assess it?

This means being an early adopter, and being a catalyst for educational innovation.

Friday, 21 April 2017

Novel initiative Teach Out: Fake news detecting #criticalthinking #mooc

If you have just a bit of time this week, and you are interested in new ways of online teaching as well as critical thinking... this is a fabulous initiative. The “Fake news, facts and alternative facts” MOOC is part of a teach out course (brief yet meaningful just-in-time learning initiative focused on a hot topic).

Course starts on 21 April 2017 (today)
Course given by the University of Michigan, USA

This is not just a MOOC; actually, it being a MOOC is the boring part. What is really interesting is the philosophy behind the teach-out, and the history behind teach-out events. This feels a bit more like activist-driven teaching, admittedly here from a renowned institute.



Brief course description
Learn how to distinguish between credible news sources and identify information biases to become a critical consumer of information.
How can you distinguish between credible information and “fake news?” Reliable information is at the heart of what makes an effective democracy, yet many people find it harder to differentiate trustworthy journalism from propaganda. Increasingly, inaccurate information is shared on social networks and amplified by a growing number of explicitly partisan news outlets. This Teach-Out will examine the processes that generate both accurate and inaccurate news stories and the factors that lead people to believe those stories. 

Participants will gain skills that help them to distinguish fact from fiction.

This course is part of a Teach-Out, which is:
  • an event – it takes place over a fixed, short period of time
  • an opportunity – it is open for free participation to everyone around the world
  • a community – it will be joined by a large number of diverse individuals
  • a conversation – an opportunity to give and take ideas and information from people

The University of Michigan Teach-Out Series provides just-in-time community learning events for participants around the world to come together in conversation with the U-M campus community, including faculty experts. The U-M Teach-Out Series is part of our deep commitment to engage the public in exploring and understanding the problems, events, and phenomena most important to society.

Teach-Outs are short learning experiences, each focused on a specific current issue. Attendees will come together over a few days not only to learn about a subject or event but also to gain skills. Teach-Outs are open to the world and are designed to bring together individuals with wide-ranging perspectives in respectful and deep conversation. These events are an opportunity for diverse learners and a multitude of experts to come together to ask questions of one another and explore new solutions to the pressing concerns of our global community. Come, join the conversation!

(Picture: http://maui.hawaii.edu/hooulu/2017/01/07/the-real-consequences-of-fake-news/ )

Companies should attract more Instructional Designers for training #InstructionalDesign #elearning

Online learning is increasingly pushing university learning and professional training into new directions. This means common ground must be established on what online learning is, which approaches are considered best practices, and which factors need to be taken into account to ensure positive company-wide uptake of the training. Although online learning has been around for decades, building steadily on previous evidence-based best practices, it is still quite a challenge to organize online learning across multiple partners, let alone across cultures (in the wide variety of definitions that culture can have).

Earlier this month Lionbridge came out with a white paper entitled “Steps for globalizing your eLearning program”. It is a 22-page free eBook, and a way for them to get your contact data. The report is more corporate than academically inclined (the subtitle is ‘save time, money and get better results’), and offers an inside look at how companies see global elearning and which steps to take first. But when reading the report - which does provide useful points - I do feel that corporate learning needs to accept that instructional design expertise is necessary (the experts! the people!) and needs to be attracted by the company, just like top salespeople, marketing, HR… for it is a real profession and it demands more than the capacity to record a movie and put it on YouTube!

In their first step they mention: Creating a globalizing plan
  • Creating business criteria
  • Decide on content types
  • Get cultural input
  • Choose adaptation approach

The report sets global-ready content as a baseline: this section mentions content that is culturally neutral. Personally, I do not believe cultural neutrality is possible; therefore I would suggest using a culturally balanced mix, e.g. mixing cultural depictions or languages, even Englishes (admitting there is more than one type of English and they are all good). But on the bonus side, the report also stresses the importance of using culturally native instructional design (yes!), which I think can be learner-driven content to allow local context to come into the global learning approach. Admittedly, this might result in more time or more cost (depending on who provides that local content), but it also brings the subject matter closer to the learner, which means it brings it closer to the Zone of Proximal Development (Vygotsky), enables the learner to create personal learning flow (Csikszentmihalyi), or simply allows the learner to think ‘this is something of interest to me, and I can learn this easily’.

In a following step: Plan ahead for globalisation
  • Legal issues: looking at IPR or the actual learning that can be produced. 
  • Technology and infrastructure: infrastructure differs. 
  • Assessment and feedback mechanisms: (yes!) Feedback, very important for all involved
  • Selecting a globalizing partner

The report is brief, so not too much detail is given on what is meant by the different sections, but what I did miss here was the addition of peers for providing feedback, or peer actions to create assessments that are actually contextualized and open to cultural approaches. There is no mention of the instructional design experts in this section either.
In the third section a quick overview is given of what to take into account while creating global elearning content. Again the focus is on elements and tools (using non-offensive graphics, avoiding culturally heavy analogies, neutral graphics…), not on the actual instruction, which admittedly would take up more than 22 pages; but the instructional approach is, to me, the source of learning possibilities.

Promoting diverse pedagogy
The final part of the report looks at the team you need, but… still no mention of the instructional design expert (okay, it is a fairly new title, but still!). And no mention of the diversity in pedagogy that could support cultural learning (not every culture is in favor of Socratic approaches, and not every cultural group likes classic lecturing).

Attract instructional designers
While the report makes some brief points of interest, I do feel that it lacks what most reports on training lack: they seem to forget that online instruction is a real job, a real profession with real skills, one that takes years to become good at, just like any STEM or business-oriented job. This does indicate that corporations acknowledge an interest in online training (and possible profit), but… they still think that it can be built easily and does not require specific expertise.
There is no way around it: if you want quality, you need to attract and use experts. If you want to build high-quality online training that will be followed and absorbed by the learner, then interactions, knowledge enhancement, neurobiological effects… all of this matters and needs to be taken into account (or at least one needs to be aware of it).
Now more than ever, you cannot simply ‘produce a video’ and hope people will come. There are too many videos out there, and a video is a media document, not necessarily a learning element. Learning is about thinking about the outcome you want to achieve, and then working backwards, breaking the learning process down into meaningful steps. Why do you use a video? Why do you use an MCQ? Does this really result in learning, or simply in checking boxes and consuming visual media?

Building common ground as a first global elearning step
Somehow I feel that the first step should include overall acceptance of a cooperatively built basis:
What are our quality indicators (media quality, content quality, reusability, entrepreneurial effect of the learning elements, addressing global diversity in depicting actors (visual and audio), …)?

Which online learning basics does everyone in the company (and involved in training) need to know? Sharing just-in-time learning (e.g. when encountering a new challenge: take notes of the challenge and the solution), sharing best practices on the job (ideal for mobile options), flipped lectures for training moments (e.g. a case study before training hours, role play during workshops…), best practices for audio recordings… These learning basics can be many things, depending on the training that needs to be created, but they need to be set up collaboratively. If stakeholders feel they will benefit from the training, and they are involved in setting up some ground rules and best practices, they will be engaged. It all comes down to: which type of learning is needed, what does this mean in terms of the pedagogical options available and known, and what do the learners need and use?

Tuesday, 21 February 2017

2 Free & useful #TELearning in Higher Ed reports #elearning #education

These two reports give a status of TELearning in 2016: one analysing Technology Enhanced Learning for Higher Education in the UK (233 pages, with appendices starting at page 78) and one presenting case studies of Technology Enhanced Learning (48 pages, with nice examples). I give a brief summary below.

The reports were produced by UCISA (an Oxford University based network) representing many major UK universities and higher education colleges; it states it has a growing membership among further education colleges, other educational institutions and commercial organisations interested in information systems and technology in UK education.

The used definition of TELearning is: "Any online facility or system that directly supports learning and teaching. This may include a formal VLE (virtual learning environment), e-assessment or e-portfolio software, or lecture capture system, mobile app or collaborative tool that supports student learning. This includes any system that has been developed in-house, as well as commercial or open source tools."

Both reports provide an interesting (though UK-oriented) read. Here is a short overview of what you can find in them:

For the report focusing on TELearning for HE in the UK (based on the TELearning survey), I have put the main conclusions next to the main chapters:

Top 5 challenges facing institutions: staff development is the most commonly cited challenge, followed by the electronic management of assessment, lecture capture/recording (which continues to move up), technical infrastructure, and legal/policy issues.

Factors encouraging the development of TELearning: enhancing the quality of learning and teaching, meeting student expectations and improving student satisfaction are the most common drivers for institutional TEL provision. The availability of TEL support staff encourages the development of TEL, as do feedback from students, availability of and access to tools, and school/departmental senior management support. In terms of barriers to TELearning: lack of time for development is consolidating as a barrier; culture continues to be key, both departmental/school culture and institutional culture; and internal funding, with a lack of internal sources of funding to support development.

Strategic questions to ask when considering or implementing TELearning: teaching, learning and assessment strategies are consolidating, the student learning experience/student engagement strategy is on the rise, and corporate strategy and library and learning resources strategies also play a role.

TELearning currently in use: the main institutional VLEs remain Blackboard and Moodle.
Moodle remains the most commonly used platform across the sector, but alternative systems such as Canvas by Instructure are rising, as are new platforms, e.g. Joule by Moodlerooms. SharePoint has rapidly declined. There is an increase in the number of institutions using open learning platforms such as FutureLearn and Blackboard’s Open Education system. As for evaluation activity in reviewing VLE provision: institutions have been conducting reviews over the last two years, and TEL services such as lecture capture are the second most commonly reviewed service over the last two years.

Support for TELearning tools: e-submission tools are the most common centrally supported software, ahead of text-matching tools such as Turnitin, SafeAssign and Urkund. Formative and summative e-assessment tools both feature in the top 5, along with asynchronous communication tools. There is adoption of document sharing tools across the sector and a steady rise in the use of lecture capture tools. Podcasting tools continue to decline in popularity, and the new response items, electronic exams and learning analytics, appear not to be well established at all as institutional services, with only a handful of institutions currently supporting services in these areas.
Social networking, document sharing and blog tools are the most common non-centrally supported tools. TEL tools are being used to support module delivery: blended learning delivery based on the provision of supplementary learning resources remains the most common use of TEL, and only a small number of institutions actually require students to engage in active learning online across all of their programmes of study. There is increasing institutional engagement in the delivery of fully online courses, with over half of the 2016 respondents now involved, and growing adoption of MOOC platforms by institutions, but less than half of respondents are pursuing open course delivery.
There is little change in the range of online services that higher education institutions are optimising for access by mobile devices. Access to course announcements, email services, and course materials and learning resources remain the three leading services optimised for mobile devices. Library services are being optimised, and lecture recordings are being optimised at the same level as in 2014. The most common ways in which institutions are promoting the use of mobile devices are through the establishment of a bring your own device (BYOD) policy and by loaning out devices to staff and students. Funding for mobile learning projects has reduced in scale.
The outsourcing of institutional services grows: student email, e-portfolio systems, VLEs and staff email. The type of outsourcing model depends on the platform being outsourced: a Software as a Service (SaaS) cloud-based model for email services, and an institutionally managed, externally hosted model for TEL-related tools, such as e-portfolios and the VLE for blended and fully online courses.
National conferences/seminars and internal staff development all remain key development activities. There is an increase in the promotion of accreditation activities, in particular HEA and CMALT accreditation.
The Electronic Management of Assessment (EMA) is making the most demands on TEL support teams, as are lecture capture and mobile technologies. The demand from learning analytics and from distance learning/fully online courses continues to increase. A new entry which might be expected to make more demands in the future is accessibility, in particular the demands made by changes to the Disabled Students’ Allowance in the English higher education sector.

There are a number of appendices: the full data, a longitudinal analysis of TELearning over the past years (going back to 2001), and the questions that were used for the longitudinal analysis.

The report focusing on the case studies from TELearning:
These case studies are a companion to the report mentioned above. The idea is that the case studies make it possible to probe themes in the data and shed light on TEL trends through the eyes of representative institutions, offering context to the findings of the overall report.
In each of the case studies, the institutions provide answers on the following TELearning sections: the TELearning strategy used, TEL drivers, TEL provision, TEL governance and structures, TEL-specific policies, Competition and Markets Authority (CMA) strategy, the Teaching Excellence Framework, distance learning and open learning, and future challenges. The diversity of the institutions interviewed gives a good perspective on the TEL landscape within higher education in the UK.

Friday, 17 February 2017

Recognising Fake news, the need for media literacy #digitalliteracy #literacy #education

I was working on a blogpost on books focusing on EdTech people (the women, the tasks…), but then I opened up YouTube and saw that President Trump had held his first solo press conference.

I guess we can all benefit from Mike Caulfield's ebook (127 pages) on web literacy for students (available online, with other versions including pdf), a fabulous book with lots of links and useful actions to become (more) web literate (thank you Stephen Downes for bringing it to my attention).

After watching it, I felt a clear need (for me as an avid supporter of education) to refer to initiatives on the topic of real and fake news, because honestly I do not mind if someone calls something fake or real, as long as that statement is followed by clear arguments describing what you think is fake about it, and why. Before doing that, I want to share the reason for this shift in attention.

I love America, for several reasons: where Europe stays divided, the United States has managed to get its states to work together, while leaving enough federal freedom to adapt specific topics according to individual states’ beliefs; I have worked with, and honestly like to work with, Americans (of all backgrounds) and American organisations; and truly, I am in complete awe of the Bill of Rights and the way the constitution secures freedom for all. I know that a goal such as ‘freedom for all’ is difficult to attain, but at least it is an openly set vision, put on paper. I mean, I truly respect such a strong incentive to promote freedom for all citizens within a legal framework, and the will to achieve that freedom. And due to this love for the United States, I felt that Trump was okay. In democratic freedom, the outcome might not be to everyone’s liking, but… history has shown that democratic freedom can swing in a lot of ways and that this diversity nurtures new ideas and insights along the way.

However, while watching the press conference I got more and more surprised by what was said and how: there were clear discriminatory references, which I do not think befit a President of all the American people. But okay, to each his own, and rhetorical styles can differ (wow, can they differ); yet the ongoing remarks and references to Fake News, which kept coming up as an excuse and were used as a non-sequitur at any point during the press conference, just got to me. Manipulation has many faces, and only education can help build critical minds that will be able to judge for themselves, and as such be able to distinguish real from fake news. To me, even if you say ‘this is fake news’, I want to hear exactly what you mean: which part of which news is fake, and why. Enlighten me, would be the general idea.

Fake news and believing it: status
A Stanford study released in November 2016 concluded that 82% of middle-schoolers couldn’t distinguish between an ad labeled “sponsored content” and a real news story on a website, which seems to indicate that somewhere we are not addressing media or digital literacy very well. On the reasons why this lack of media literacy is occurring, I like the viewpoint of Crystle Martin, who looks at misinformation and warcraft in this article, saying:
Teaching information literacy, the process of determining the quality and source of information, has been an emphasis of the American Association of School Librarians for decades. However, teaching of information literacy in school has declined as the number of librarians in schools has declined.
Luckily, there are some opinions and initiatives on distinguishing between fake and real news. Danah Boyd had another look at the history of media literacy, focusing on the cultural contexts of information consumption that were created over the last 30 years. Danah shared her conclusions in a blogpost on 17 January 2017, entitled 'Did media literacy backfire?'. She concluded that media literacy had backfired, in part because it was built upon assumptions (e.g. only media X, Y and Z deliver real news) which often do not match the thinking of groups of people that prefer other news sites A, B and C.

Danah describes it very well:
Think about how this might play out in communities where the “liberal media” is viewed with disdain as an untrustworthy source of information…or in those where science is seen as contradicting the knowledge of religious people…or where degrees are viewed as a weapon of the elite to justify oppression of working people. Needless to say, not everyone agrees on what makes a trusted source.
The cultural and ethical logic each of us has is instilled in us from a very early age. This also means we look upon specific thinking as being ‘right’ or ‘wrong’. And to be honest, I do not feel this cultural/ethical mindset will keep all of us from being able to become truly media literate, as long as we talk to people across the board. As long as colliding thoughts fuel a dialogue, we will learn from each other and be able to understand each other in better ways (yes, I am one of those people who think that dialogue helps learning and results in increased understanding, thank you Socrates).
If this is the case, then we need to do a better job of improving media literacy, including listening to people with other opinions and how they see things. It is a bit like the old days, where people from the neighborhood went to the pub, the barbershop, or any get-together where people with different opinions meet, yet feel appreciated even during heated debates.

Maha Bali, in her blogpost “Fake news, not your main problem”, touches on the difficulty of understanding all levels of the reports provided in the news and other media. Sometimes it does demand intellectual background (take the Guardian: I often have to look up definitions, historical fragments etc. to understand a full article; it is tough on time and tough to get through, but… sometimes I think it is worth the effort). Maha Bali is a prolific and very knowledgeable researcher/educator. She touches on the philosophical implications of ‘post-truth’, and if you are interested, her thesis on critical thinking (which she refers to in her blogpost) will probably be a wonderful read (too difficult for me). So both Maha and Danah point out that the personal is not only political, but also colors each of our personal critical media literacies.

If media literacy depends on personally developing skills to distinguish fake (with some truth in it) from real (with some lies in it), I gladly refer to some guidelines provided by Stephen Downes, as they are personal. One of the statements I think is pivotal for distinguishing between fake and real news is understanding that truth is not limited to one or more media papers/sites/organisations; it is about analysing one bit of news at a time. It is not the organisation that is authoritative at all times; it is the single news item that is true, or at least as real as it can get. So here is the list of actions put forward by Stephen Downes for detecting fake news: trust no one; look for direct evidence (verification, confirmation, replication, falsification); avoid error (the major sources of error being prediction, relevance, precision, perspective); take names (based on trust, evidence and errors); and, as a final rule, he suggests diversifying your sources (which I really believe in: the pub analogy).

Another personal take on detecting fake news comes from Tim O'Reilly, who describes a personal story and, while doing so, sheds some light on how an algorithm might be involved.

Thinking about algorithms, you can also turn to some fake news detectors:

The BS Detector: a fabulous extension to the Mozilla browser. It looks at extreme bias, conspiracy theories, junk science, hate groups, clickbait, rumor mills… http://bsdetector.tech/

Snopes: started out as a website focused on debunking urban legends, and turned into an amazing fact-checking website (amazing because you can follow the process of how they look at a specific item and then decide whether it is fake). http://www.snopes.com/

And finally, for those who like to become practical asap: a lesson plan on fake news provided by KQED http://ww2.kqed.org/lowdown/wp-content/uploads/sites/26/2016/12/Fake-news-lesson-plan.pdf

In my view, the growing acceptance of the idea of fake news is related to the increased divide within society. So, in a way I agree with Danah Boyd: we read and agree with specific people and news sources, and so we filter our sources down to those people and media. Seldom do we read up on sources from media we do not agree with, or people we disagree with. It used to be different, when specific topics were discussed within our community, with a mix of ideas and preferences.
So maybe media literacy could be addressed at a community level, where everyone gets together and shares their opinion on certain topics. We recreate the local pub or café, where everyone meets and gets into arguments about what they believe (or not). Media literacy – to me – is about embracing diversity of opinion, listening, seeing the arguments from the other side and… making up your own mind again.

So, coming back to President Trump’s referencing of fake news: in terms of increasing media literacy, I do not have a problem with referring to something as fake news; I do have a problem with that claim not being explained. What is fake about it? Why? And again, by saying that, I mean a real explanation, not simply repeating ‘this is fake news. It is. I tell you it is’ (feel free to imagine the tone of voice in which such a sentence might be delivered). Now give me the facts, because I do want to know why you or anyone else is labeling something as true or false.

Wednesday, 15 February 2017

How do Instructional Designers support and add to teacher knowledge

As online learning becomes better known, the quality of the delivered online materials becomes more essential, as learners can (partly) decide which courses they will follow based on the quality of the course material. One of the challenges is to give teachers and trainers an idea of how instructional designers can help (IDs are schooled in online learning options) and what instructional designers can bring to the interdisciplinary learning/teaching team (broader online and blended learning knowledge, specifically aimed at online or blended interactions, relying on specific theoretical frameworks that facilitate practical implementations). So, having been asked by EIT InnoEnergy to provide an overview of why instructional designers are an important human resource profile for ensuring high-quality online or digital learning material, I put together this brief presentation. The slides are text-rich, so that course partners (SELECT) can have another look after the presentation, and an ongoing conversation with local instructional designers might be started.

In the meantime I am continuing the inspiring work on the Instructional Design Variation Matrix (a practical guide for Instructional Designers, a bit of an extended job aid).

(picture: deeply thinking teachers from KTH Sweden, Polito Italy, UPC Spain, IST Portugal, Aalto Uni Finland listening to online learning experiences at InnoEnergy SELECT kick-off meeting)



Friday, 20 January 2017

Tips for a PhD defense or viva #phd

It is with quite some pleasure that I was awarded the PhD in Educational Technologies last week.

The UK version of a PhD defense is called a viva, which resembles a closed oral examination (open book) with one external examiner (affiliated with a university other than the one you are at) and an internal examiner (affiliated with your own university, but with whom neither you nor your supervisors have co-authored a paper – so not closely professionally related). In addition to that, you have one observer (normally one of your supervisors; she or he will take notes on what is said, and on possible recommendations) and a chair (Doug Clow, who explained all the details of the viva and saw to it that everyone stayed hydrated and in an objective state of mind). In my case the external examiner was Neil Morris (Leeds University), and the internal examiner was Allison Littlejohn (The Open University, UK). The external examiner usually leads the questioning, which was also the case in my viva. Btw, the central question of my PhD thesis was 'what characterizes the informal, self-directed learning of experienced adult, online learners engaged in individual or social learning using any device to follow a FutureLearn MOOC'. It resulted in a conceptual framework for informal self-directed learning, using a method that allowed the voices (experiences) of the learners to come through, as such providing a theory from the ground up (in most cases a framework starts from theory, providing a top-down dynamic to come to the conclusions). A draft version of the thesis can be read here. The picture shows my two supervisors (Mike Sharples and Agnes Kukulska-Hulme) and Rebecca Ferguson (who was kind enough to be my main examiner during the mock viva) and my wonderful colleagues Janesh Sanzgiri, Jenna Mittelmeier and Garron Hilaire.

The questions started off mildly (with a fair question which aims at making you feel comfortable, along the lines of: briefly describe your research; why were you interested in the topic you investigated). From there the questions tend to become more complex and to demand more in-depth answers. Normally the questions will start at the beginning of your thesis and consist of overall questions (e.g. how did you select your literature?) as well as very detailed ones (why did you select only that fragment here?) which the examiner found either of interest, confusing, or lacking. This means you really need to understand why you did what you did, throughout your thesis.
These are some of the questions I got, with some additional information:  
  • How do your research questions follow from your literature review? During preparation I linked all of my research questions to the most influential paper I mentioned in my literature review. This is also handy for other literature related questions, as you memorise core papers and their subsequent authors.
  • Which element of your findings gave rise to the most poignant discussion, and can you list the main authors for the discussion reflecting on that part of your findings? Why did you limit yourself to these authors for the discussion on that part of the X findings? I can tell you, this was a tough question. It means you relate the literature of your literature review and use some of those papers to fuel the scientific discussion of your findings, taking into account what the literature already pointed to, as opposed to what your findings show to be different (or similar, as you will most likely find that your findings have commonalities as well as differences with prior research).
  • What is the relation between the research of your pilot study and the main study? In my case the pilot study had different research questions (and sub-questions) than the main study; this had to be explained, and it had an effect on the findings. The change resulted from the qualitative, exploratory starting point of my study, and the findings from the pilot urged me to rephrase the research questions of my main study a bit.
  • Is there a theory that runs through your investigation and has an effect on the literature you chose to focus on, the methodology, and the research instruments? In my case that was socio-constructivism. Briefly: one of the theories I used (connected to the pedagogical design of FutureLearn) is Laurillard's conversational framework, specifically the informal conversational framework, which is related to the socio-constructivist view of the world. Additionally, I chose to use Charmaz's constructivist Grounded Theory approach, which is also deeply embedded in the socio-constructivist heritage, and I used multiple learner voices to look at emerging codes, categories and concepts from multiple viewpoints (I used multiple data sources provided to me over time by the participants in my study; participants were asked to self-report their learning through learning logs, sent at different moments throughout their learning experience with FutureLearn MOOCs).
  • Questions could also be limited in scope, for instance: what is your definition of socio-constructivism? Prepare core definitions that are key to your thesis.
  • How did your research questions guide your coding? A tough one, as there is a tension between qualitative research, which starts from the concept of no assumptions, and the research questions inevitably guiding the codes (e.g. codes related to the sub-question on technology for learning).
  • Or, considering one area of my findings: what type of definition are you using for social learning, and how does it differ from other social learning definitions? In my case, I used social learning as defined by Laurillard, which fits FutureLearn and is based on the notion of Socratic dialogue, meaning it involves at least two active people. This stands in contrast with, for instance, Bandura (whose definition of social learning goes back to a behaviorist view, as it can be traced back to Pavlov), who also sees passive learning (e.g. lurking) as a form of social learning, as it is still embedded in the whole of society as the learning environment and is part of observing.
  • Two difficult questions were raised during my mock viva. A mock viva is a sort of general rehearsal for your viva. It usually involves your supervisors, as well as a colleague who wishes you well and wants to strengthen your viva skills. In my case, I had the pleasure of having Rebecca Ferguson as my mock viva examiner, and she is fabulous! I also used some of her tips in preparation for the mock viva; have a look at the top 40 viva questions she listed as important here. One of the questions she asked me was: what is the difference between MOOC learning and other online learning? E.g. the active presence of a facilitator, scale, length of course versus length of curriculum, prerequisites, compulsory or not. Another difficult question was: why did you not take into account the MOOC educators? Where the better answer would have been: I did take the educators into account, but only in the roles in which they were seen by the learners, not in their classical roles as defined by educational institutes.

Some general remarks:
Make sure you know your thesis, and use parts of it when looking for answers to the questions you are getting. I mean, physically point to your thesis; this will buy you some time to find the right answer, and will give you some additional content support.

Look confident and be succinct. This conveys professionalism, a research professionalism. It does not matter whether you believe it; just know that you are indeed the expert on that topic, so you can and must be confident.

The questions you get can come from a variety of motivations: interest in the approach, doubt about what you wrote, or simple trickery to see whether you really understand what you are doing. This means that at times you might hear a question which prompts an internal voice to say “Hey! But I did do that, or I do have an argument for doing it that way!”; in that case voice your answer and do not be afraid to stick with your thesis, or to correct the examiners. Of course, it is essential to always stay polite, also when you are entering a discussion. But really, the examiners are there to strengthen your thesis; they are in a way trying to let you grasp how you can make your thesis even stronger, and you are the one who is the real expert in what you have investigated: you know the processes you used to get to your main conclusions.