Message sent by Itamar Tzadok

The fact that it is inconsistent with the way the Forum works may also mean that there is a bug in the Forum rather than in the Database ... :-)

I agree that there may be something wrong in the current behavior, but not necessarily what you point out. 'All participants' is a pseudo-group, but a group nonetheless. All the students are by definition members of the 'All participants' group and so should be able to post and view entries in that group. What may be inconsistent is that the entries of the 'All participants' group are displayed not only in that group but also in every other group, whereas entries of other groups are displayed only in their respective groups. :-)
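For illustration only, here is a toy sketch of the two display rules at issue. This is not Moodle's actual code; the data shapes and the ALL_PARTICIPANTS sentinel are made up for the example:

```python
# Toy model of the two visibility rules under discussion (not Moodle's code).
# Entries in the 'All participants' pseudo-group are shown in every group's
# view; entries of a regular group are shown only in that group's own view.

ALL_PARTICIPANTS = 0  # hypothetical sentinel id for the pseudo-group

def visible_entries(entries, current_group):
    """Return the entries displayed when viewing `current_group`."""
    return [e for e in entries
            if e["group"] in (current_group, ALL_PARTICIPANTS)]

entries = [
    {"id": 1, "group": ALL_PARTICIPANTS},  # shows up in every group view
    {"id": 2, "group": 7},                 # shows up only in group 7's view
]
print(visible_entries(entries, 7))  # both entries
print(visible_entries(entries, 8))  # only entry 1 -- the asymmetry in question
```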

3. Itamar... I fail to see how you can call that an LO, honestly! And as for what you refer to as measuring outcomes, I think you will need a thousand questions in a quiz for that LO... never mind an ILO. There is a difference, in that the students demo your intentional learning outcomes as well as the peripheral LOs...

Dawn, I call that an LO only because it appears in a course syllabus as an LO. And we are likely to find more such statements as LOs in course syllabi than anything else. As should be clear from my post, I wouldn't have something like that as an LO in my courses. And yes, for what I refer to as measuring outcomes I do have thousands of questions in the question bank by way of variation, and the quizzes are composed of random subsets, to the effect that each student has a unique set of questions in each quiz. This requires a lot of work on my part. But then I can assign weekly quizzes and a final exam from the same question bank, so that the final assessment has exactly the same types of problems, in the same structure and with the same tools, as what has been practiced weekly.

I don't think that the distinction between intentional and peripheral is useful in any way, neither for me nor for my students. In a well-structured course there should be no more than 3-4 problem types, which the students would learn to solve and would be expected to solve in the final proctored assessment. The description of the problem types and solving strategies is effectively the description of the learning outcomes. Anything else is irrelevant and shouldn't be part of the learning outcomes and assessment, although it may be part of the course as enrichment for those 3 students who have extra time. Any mapping of practical learning outcomes to postulated cognitive faculties according to this or that taxonomy is gratuitous. It is not likely to have any effect on the actual performance of the learners.
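For illustration, a minimal sketch of how such per-student quiz composition might work. The bank layout, the seeding scheme, and the numbers are assumptions for the example, not the actual setup described above:

```python
# Sketch: compose quizzes from a categorized question bank so that every
# student gets a unique random subset, while weekly quizzes and the final
# exercise exactly the same problem types.

import random

# Hypothetical bank: problem type -> pool of question ids (thousands in practice)
bank = {
    "type_A": [f"A{i}" for i in range(1000)],
    "type_B": [f"B{i}" for i in range(1000)],
    "type_C": [f"C{i}" for i in range(1000)],
}

def compose_quiz(student_id: str, quiz_name: str, per_type: int = 2) -> list[str]:
    """Draw a reproducible random subset per problem type for one student."""
    rng = random.Random(f"{student_id}:{quiz_name}")  # stable per student/quiz
    quiz = []
    for qtype, pool in bank.items():
        quiz.extend(rng.sample(pool, per_type))  # same types, different items
    return quiz

print(compose_quiz("student42", "week3"))
print(compose_quiz("student43", "week3"))  # unique subset, identical structure
print(compose_quiz("student42", "final"))  # final drawn from the same bank
```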

:-)

Well, I argue that the "tool" makes the important difference in learning outcomes.

Here is another illustration of this point.

Consider the following learning outcome (an actual LO from an actual course given this term):

recognize and define musical concepts and elements in Western music

This LO is likely to generate a serious misalignment because it doesn't specify 'how' (the "tool" by means of which) it should be demonstrated for evaluation. Is it going to be by single-answer multiple-choice questions? Or by short-answer questions? These are two different technologies which involve different strategies and require different forms of practice and preparation. If the instructor does not commit to the how, students may find themselves doing multiple-choice quizzes during the course and short-answer questions in the final exam. And this will inevitably affect the learners' performance. And even if the instructor commits to one question type throughout, students may be asked to answer the questions online during the course but with pen and paper in the final exam. This too will inevitably affect the learners' performance. Note that the issue here is not with how we use the online quiz or the pen-and-paper one, but with instructing with one tool and assessing with another, where the tools make an important difference.

The problem with Mazur's premed students' performance was not the lack of PI (Peer Instruction) but the lack of aligned instruction and assessment. While PI may have further advantages, adding a proper textbook covering the other conceptual and terminological framework could have eliminated the misalignment and thereby solved the performance issue just as well.

:-)

Matt, what you find yourself nodding along to, that "technological solutions and activity rubrics are merely the platforms and tools that can help us to implement pedagogical solutions", only makes your position inconsistent and thus untenable, and from there all the difficulties arise.

The technological platform is an integral part of every context (or situation) of what we do. It conditions the context to such an extent that, for all practical purposes, different platforms constitute different contexts even in apparently similar problem domains. There is no more point in asking how one-post-and-two-comments in online discussion forums would look in a face-to-face classroom than in asking how light would look in the dark. It would not. It cannot. Online and offline discussions are distinct contexts; each makes use of things that do not come into play in the other. And while some aspects of the end result may seem similar at some abstract level, the particulars remain different in important respects which depend on the particular ways in which they are constructed. Moving between different contexts of an apparently similar problem domain is not a trivial task but a problem domain in itself.

The crux of the matter is that the hidden false premise you share with many other educators, namely that the technology is just a tool, is probably the main cause of assessment misalignment, which results in poor learner performance and baffled educators.

The hidden premise and its effects are evident in a talk titled "Memorizing or Understanding: Are we teaching the right thing?" that Eric Mazur gave at Queen's University in 2011. In the talk Dr. Mazur describes his bafflement over his premed students' success in solving problems when they were given in the textbook's conceptual and terminological framework, and their failure in solving the same (in his view) problems when they were given in a completely different conceptual and terminological framework that was not covered in the textbook or in class. He proceeds to make a couple of problematic distinctions. First, he distinguishes between the textbook description and the other description as conventional vs. conceptual. But of course, neither framework is more or less conventional or conceptual than the other. These are different languages which depict the presumably same physical reality in different terms and concepts.

Then Dr. Mazur tries to explain the learners' performance by the distinction that appears in the title of the talk, namely memorizing vs. understanding. The explanation is that the students were successful in solving the "conventional" problems because they memorized the textbook strategies for solving such problems, but unsuccessful in solving the "conceptual" problems because they did not understand the concepts. Of course, an unbiased reader who is well versed in the common learning taxonomies would immediately object that acquiring a strategy for problem solving is hardly memorization and should rather be characterized as the higher cognitive faculty of application. So the distinction doesn't work from the outset. And not teaching problem solving in the other conceptual framework while expecting the students to somehow master it is hardly a problem with the students' understanding (whatever understanding is). It is as absurd as saying that a non-French speaker who wishes to buy a baguette doesn't understand what he wants to buy just because the storekeeper in the French village looked at him in puzzlement when he requested one in English.

Dr. Mazur also describes how he added instruction in problem solving in the other framework (in the form of PI) to the course, and how performance in the assessment improved as a result. Dr. Mazur then concludes: "So better understanding leads to better problem solving!". But he is misled by his own faulty distinctions. The conclusion should rather be: "Aligned instruction and assessment result in aligned performance!".

So, with respect to the false premise, you are in highly distinguished company. But the premise is still false, and it only generates confusion and bafflement where there should be none.

Here is the talk: http://www.queensu.ca/ctl/resources/videos/mazur.html. Highly entertaining. Enjoy!

:-)

Let's take the term 'philosophy' and all its cognates out of this discussion then.

Let's examine what you say you ought to do.

... if I want learners to understand math in such a way as to apply it in their lives I will need to have them learn math in a way that applies to their lives.

If you want learners to apply math in their lives, you need to have them apply math in their lives. Why do you need to talk about understanding as if it were some general notion that applies to everyone, especially when you reject any sense of objectivity? Context goes all the way down to the personal level. Can you really know if and how I understand something, or are you just judging my "understanding" by whether I was able to demonstrate the execution of one or more specific tasks according to a certain set of criteria? If the latter, then tell me what I need to do and how I should assess my performance, demonstrate to me how you do it, and be around to offer help if I need any.


If I want learners to write in order to effectively communicate their ideas to others, then I must teach them to write by having them communicate their ideas to others.

Yet another example of the same. Since you reject objectivity, 'effectively communicate' is meaningless, and hence useless as a learning directive, until you put it in a well-defined context and give a detailed operational definition of the communication that needs to be demonstrated: when it is effective and when it is not. The details should be sufficient to allow the average learner to self-assess the effectiveness of their communication. For instance, you can say that if one gets 4 likes on a forum post, that's effective communication. Now the learner can do all kinds of things in order to get those 4 likes, well beyond your prescribed textbook or class notes. That may include bribing classmates. Are you OK with that? Maybe not, but it may be something you need to account for if you want learners to understand effective communication in such a way as to apply it in their lives.
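To make the point concrete, the 4-likes example above as a toy operational definition. The data shape is made up; any real platform would record something equivalent:

```python
# Toy operational definition of "effective communication" for a forum post,
# using the 4-likes criterion from the example above.

LIKE_THRESHOLD = 4

def is_effective(post: dict) -> bool:
    """A post 'communicates effectively' iff it collects enough likes."""
    return post.get("likes", 0) >= LIKE_THRESHOLD

# The learner can now self-assess without appealing to anyone's judgment:
print(is_effective({"author": "student1", "likes": 5}))  # True
print(is_effective({"author": "student2", "likes": 2}))  # False
```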

:-)