Yishay Mor  
blog index

learning

health warning
[education, learning]

Consider what might be termed the quiet desperation syndrome, a disease that attacks the nervous systems of doctoral students. Students who are afflicted begin with a method, "I want to do a qualitative study," "I want to do a MANOVA," and then cast about for a question.

Kenneth R. Howe and Margaret Eisenhart. Standards for Qualitative (and Quantitative) Research: A Prolegomenon. Educational Researcher, (19)4:2-9, 1990.
posted by Yishay Mor on Tuesday 19th, June 2007 (16:45) - comments (0) - permanent link


openlearn talk @ the London Knowledge Lab
[learning, learning technology, open content, open research]

OpenLearn is a massive effort by the UK Open University to make its course materials free and open on the web. You can also remix and upload your own versions. See the openlearn talk @ the London Knowledge Lab.

Also, check out the OpenLearn2007 conference (deadline coming up!). Free open content & a free open lunch.

(Digg the video)

posted by Yishay Mor on Sunday 20th, May 2007 (11:20) - comments (0) - permanent link


how to ... become an internet folk hero
[learning, open content, web2.0]

this is beautiful. Jessamyn is a librarian. One day she decides to try Ubuntu. She has so much fun, she makes a little movie and YouTubes it. She wakes up the next morning to find that she's been crowned an internet folk hero.


Hey, even Ubuntu called.
posted by Yishay Mor on Friday 11th, May 2007 (17:59) - comments (0) - permanent link


Tim O'Reilly on Web 2.0 and Education
[education, learning, school, so-so, web2.0]

Steve Hargadon interviews Mr. Web 2.0 on what it means for education. What could be more appropriate than posting the podcast on his excellent blog?

digg story

posted by Yishay Mor on Thursday 3rd, May 2007 (10:21) - comments (0) - permanent link


Aha! So that's what it is
[learning, open content, publications]

I just love brain science:*
People sometimes solve problems with a unique process called insight, accompanied by an “Aha!” experience. It has long been unclear whether different cognitive and neural processes lead to insight versus noninsight solutions, or if solutions differ only in subsequent subjective feeling. [...] Functional magnetic resonance imaging (Experiment 1) revealed increased activity in the right hemisphere anterior superior temporal gyrus for insight relative to noninsight solutions. The same region was active during initial solving efforts. Scalp electroencephalogram recordings (Experiment 2) revealed a sudden burst of high-frequency (gamma-band) neural activity in the same area beginning 0.3 s prior to insight solutions. This right anterior temporal area is associated with making connections across distantly related information during comprehension. Although all problem solving relies on a largely shared cortical network, the sudden flash of insight occurs when solvers engage distinct neural and cognitive processes that allow them to see connections that previously eluded them.
First, I love their way of defining insight. Second, I'm amazed by the way they measure the moment to an accuracy of 0.3 seconds. But the best part is how they show that insight is related to finding lateral connections - using a lateral connection problem set!

*: Mark Jung-Beeman et al. Neural activity when people solve verbal problems with insight. PLoS Biology, (2)4:500-510, 2004.
posted by Yishay Mor on Wednesday 11th, April 2007 (13:56) - comments (0) - permanent link


Clive Thompson on Radical Transparency
[education, learning, open source, philosophy, technology]

Clive Thompson is researching a Wired article on radical transparency, and what better way to do it than to post a note on his blog asking for input or, as he puts it, tapping the hivemind:

Normally, I don't post about magazine assignments I'm working on -- because the editors want to keep it secret. But now I'm researching a piece for Wired magazine, and the editors have actually asked me to talk about it openly. That's because the subject of the piece is "Radical Transparency". And, in fact, I'd like your input in writing it.

The piece was originally triggered by a few postings on the blog of Wired editor-in-chief Chris Anderson, and the thesis is simple: In today's ultranetworked online world, you can accomplish more by being insanely open about everything you're doing. Indeed, in many cases it's a superbad idea to operate in highly secret mode -- because secrets get leaked anyway, looking like a paranoid freak is a bad thing, and most importantly, you're missing out on opportunities to harness the deeply creative power of an open world.

Interestingly, much of the discussion refers to scientific process, which seems to tie in nicely with Kaleidoscope's notion of an open research community.

The thing about openness is knowing where it works, where it doesn't, and how to tell the two apart. It's absolutely great to be radically transparent with your spouse, but not always a great idea with your mother-in-law. But then, Clive means radical: open to all. Again, sometimes, for some things, it's great. As the LA Times realized, it doesn't work so well for writing editorials. Then again, maybe it could - if you carefully designed the right technology and the right social practices to use it.


posted by Yishay Mor on Wednesday 17th, January 2007 (12:12) - comments (0) - permanent link


Do avatars dream of human rights?
[games, learning, philosophy, technology]

The Milgram experiment always sends a shiver down my spine. I say it should be a part of any national curriculum. A reminder of what we're all capable of. Luckily, you can't do that kind of thing any more. Well, at least not to humans.

A new study replicated the Milgram experiment with avatars. The results are... creepy. Sorry, I can't find a better word. Just look at the videos. (I shouldn't say that; you should read the paper.) What gives me the creeps is not the fact that people relate to avatars in much the same way they react to humans, although I'll get back to that soon. It's just watching a human administer the electric shock, even if he's sending it to an avatar. The Horror. The Horror.

This sheds new light on the potential of interactive narrative environments (such as 'Façade') for learning. If we react to avatars as if they were humans, then their influence on us - for good and for bad - could be similar. We would pay more attention to an avatar we trust and respect, be offended by their insults, and reflect on moral dilemmas they present to us.

But this also puts ideas such as human rights for robots in a new perspective. No, I haven't gone bonkers. I'm not anthropomorphizing Aibo and Sonic the Hedgehog. It's us humans I'm worried about. Our experiences have a conditioning effect. If you get used to being cruel to avatars, and, at some subliminal level, you do not differentiate emotionally between avatars and humans, do you risk losing your sensitivity to human suffering?


(hat tip to Rough Type)

posted by Yishay Mor on Thursday 4th, January 2007 (12:57) - comments (3) - permanent link


two new papers
[learning, publications, technology]

Designing to see and share structure in number sequences. The International Journal for Technology in Mathematics Education, (13)2:65-78, 2006. PDF

Design approaches in technology enhanced learning. Interactive Learning Environments, Taylor & Francis, in press.  PDF


posted by Yishay Mor on Thursday 5th, October 2006 (22:13) - comments (0) - permanent link


Good or bad?
[education, learning, technology]

Google has launched its Literacy Project. Is this a noble contribution to human welfare, or a corporate attempt to dominate learning?
My 5-minute test suggests that it is neither, and not much of anything in general. It is simply a new front end on existing services. In fact, it's quite pathetic - something like an exercise in high-school HTML 101: create a web page which looks like an education portal using standard education services.

Try it out: put some 'uneducational' keywords in the search boxes and see what you get - the same ol' internet, served by Google.
posted by Yishay Mor on Thursday 5th, October 2006 (10:28) - comments (0) - permanent link


Kirschner, Sweller and Clark
[education, learning, philosophy]

Chris Brannigan from Caspian has pointed me to this paper [1].

Chris says this paper has been causing a stir in the EdTech blogosphere:
... with its provocative title and some of the criticism of the paper has actually avoided its main points. Aside from that, it links to some of the conversations that we were having and I have had with other researchers interested in learning based games. We have most often avoided the pure ‘sandbox’ approach because of the very real practical barriers to use within a classroom lesson situation. However, I am interested in the implications of this research and the work of others like Van Joolingen, on the actual learning effectiveness of this approach. What does this mean for us in the design of learning based games?
The online version Chris sent me is a not-for-quote draft, so please see my comments here as my personal impressions from that text - not as a review of the published paper. (This is, after all, a blog.) The question at hand is the debate between constructivist and instructionist approaches to education.
On one side of this argument are those advocating the hypothesis that people learn best in an unguided or minimally guided environment, generally defined as one in which learners, rather than being presented with essential information, must discover or construct essential information for themselves (e.g. Bruner, 1961; Papert, 1980; Steffe & Gale, 1995). On the other side are those suggesting that novice learners should be provided with direct instructional guidance on the concepts and procedures required by a particular discipline and should not be left to discover those procedures by themselves (e.g. Cronbach & Snow, 1977; Klahr & Nigam, 2004; Mayer, 2004; Shulman & Keisler, 1966; Sweller, 2003). Direct instructional guidance is defined as providing information that fully explains the concepts and procedures that students are required to learn as well as learning strategy support that is compatible with human cognitive architecture. Learning, in turn, is defined as a change in long-term memory.
The main argument is that:
After a half century of advocacy associated with instruction using minimal guidance, it appears that there is no body of research supporting the technique. In so far as there is any evidence from controlled studies, it almost uniformly supports direct, strong instructional guidance rather than constructivist-based minimal guidance during the instruction of novice to intermediate learners. Even for students with considerable prior knowledge, strong guidance while learning is most often found to be equally effective as unguided approaches. Not only is unguided instruction normally less effective, there is evidence that it may have negative results when students acquire misconceptions or incomplete and/or disorganized knowledge.

I can see where this comes from. Constructivist / problem-based theory is often used as an excuse for what Dennis Hayes and others call the FoFo educational methodology (F... off and find out). I must say this matches my own experience. I can't forget the time in a basic research methods course when we were asked to "think about the meaning of 'primary sources'... umm, yes, why don't you turn to the person next to you and discuss this question". My problem is that this isn't constructivism: you don't 'socially construct' dictionary definitions. This is simply bad teaching. It has nothing to do with theory. If you ever saw Papert work with kids you would know that constructionism is anything but minimal guidance. The key issue is embeddedness / situatedness, not quality of guidance. Lazy teaching is bad, no matter what your philosophy is.
Putting aside the interpretation and implementation of theory, one can question its fundamental grounding. Here, the basis for critique is the psychological framework of cognitive architecture. I must begin with a disclaimer: the last time I did any serious reading of cognitive psychology was over 10 years ago. My encounter with cognitive architecture comes primarily from an AI perspective.  Yet the propositions here seem frightfully familiar. As presented, this appears to be a functionalist, box-diagram model of the human mind. For example, it stresses the notion of short-term and long-term memory, and sees learning as the transfer of knowledge from one to the other.
Furthermore, that working memory load does not contribute to the accumulation of knowledge in long-term memory because while working memory is being used to search for problem solutions, it is not available and cannot be used to learn. Indeed, it is possible to search for extended periods of time with quite minimal alterations to long-term memory (e.g., see Sweller, Mawer, & Howe, 1982).
I call this a box-diagram model, because I believe that such theories are driven by our quest to have a model which can be nicely drawn on paper. You draw a box for "short term memory", another for "long term memory", an arrow between them labeled "learning", and Bob's your uncle. The only problem, of course, is that the human mind doesn't work that way. Take the experiments from the visual cognition lab, for example. When you observe carefully you find that perception, information, and knowledge are all one big spaghetti pile. Well, not quite, but several spaghetti piles with a lot of noodles weaving their way from one to the other. No pretty block diagrams. And this is just one example: neurological studies of learning (e.g. Zwaan & Taylor, Pecher & Zwaan) show similar results. The flow from experience to abstract knowledge is gradual, continuous, non-linear, and most importantly: situated, embedded.
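To make the caricature concrete, here is what the box-diagram model looks like as a program. This is a deliberately naive Python sketch of my own, not anyone's published model; the class and method names are invented for illustration.

```python
# A deliberately naive sketch of the box-diagram model of mind:
# two separate stores, and "learning" as the arrow between them.
# All names here are invented for illustration.

class BoxDiagramMind:
    STM_CAPACITY = 7  # the proverbial "magical number seven"

    def __init__(self):
        self.short_term = []    # small, volatile buffer
        self.long_term = set()  # large, permanent store

    def perceive(self, item):
        """New information lands in the short-term box."""
        self.short_term.append(item)
        if len(self.short_term) > self.STM_CAPACITY:
            self.short_term.pop(0)  # overflow is simply forgotten

    def learn(self):
        """'Learning' is the arrow: copy items from one box to the other."""
        self.long_term.update(self.short_term)


mind = BoxDiagramMind()
for fact in ["fact one", "fact two"]:
    mind.perceive(fact)
mind.learn()
print(mind.long_term)
```

Clean, tidy, and drawable on a single slide - which is exactly the problem: nothing in the spaghetti-pile evidence looks like this.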
James Kroger [2] leaves no room for doubt:
Ruchkin et al. present compelling evidence that information in working memory, rather than existing in a special purpose buffer distinct from the neural substrates specialized for perceiving that kind of information, is a state of activation in those same substrates under the control of frontal cortex. As the authors note, this is a more parsimonious scheme than duplicate representation architectures for the perception and storage of the myriad kinds of information we deal with. The view that attention activates representations, even in low-level visual areas, has also been demonstrated for nonverbal information by Kastner et al. (1999) and others, and the control of posterior representations by frontal cortex was embodied in our computational model of working memory (Kroger et al. 1998).
This leads to situated models of learning: action-situated, such as Lakoff & Núñez's embodied mathematics, and socially situated, such as Rogoff's apprenticeship model. Both fit well with constructivism, constructionism and problem-based learning. Neither has anything to do with unguidedness.
Many studies stress the critical role of the teacher in constructivist education. Proponents of this approach would explain that for exploration to be effective, it must be guided. Otherwise, it is a random walk. In fact, while instructionist learning is predictable, being pre-arranged according to schedule, it needs much less guidance than learner-directed learning, where the teacher's knowledge, experience and judgment guard the learner from wandering aimlessly. Yet, if one's starting point equates constructivism with negligence, it is possible to reach conclusions such as:
Because students learn so little from a constructivist approach, most teachers who attempt to implement classroom-based constructivist instruction end up providing students with considerable guidance.
Or make discoveries such as:
Hardiman, Pollatsek, and Weil (1986) and Brown and Campione (1994) noted that when students learn science in classrooms with pure-discovery methods and minimal feedback, they often become lost, frustrated, and their confusion can lead to misconceptions.
Of course students need guidance. The conclusion from this 'evidence' is simple: for discovery learning to be effective, it requires careful attention from a good teacher. If there's a critique here, it should be that teachers are not properly trained to apply the theory. Again, no theory should be an excuse for poor teaching. I can easily come up with an argument against instructionism:
If a teacher instructs nonsense, then learners are likely to learn nonsense.
In fact, I had a teacher who taught us, very methodically and didactically, that the river Seine flows 50 meters uphill, and that every liquid contains water.
The authors seal their argument with a call for controlled experiments. They argue that
Controlled experiments almost uniformly indicate that when dealing with novel information, learners should be explicitly shown what to do and how to do it.
Now there's a problem with controlled experiments in education. First, you need something to control. You need a situation where all but one variable is fixed. That is easy to achieve in a tightly engineered situation, but not so much in an open-ended one. Then, you need something to measure. Some of my favorite examples of teaching and learning are cases of: 'we planned to teach X, but then something unexpected happened, and we ended up learning X*'. In a controlled experiment, that doesn't cut the mustard. You simply can't say 'the ANOVA sucks, but they learned something else.'
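To see what such an experiment actually computes, here is a minimal sketch of a two-condition comparison using SciPy's one-way ANOVA. The scores are made up for illustration; the point is structural: the test only ever sees the one pre-chosen measure, so whatever else the students learned - the X* - never enters the analysis.

```python
# Minimal sketch of a controlled two-condition comparison.
# The scores are invented for illustration.
from scipy.stats import f_oneway

# Post-test scores on the planned topic X, one group per condition.
guided_scores = [72, 75, 78, 74, 77]
unguided_scores = [61, 66, 59, 64, 63]

# One-way ANOVA: did the conditions differ on the measured variable?
f_stat, p_value = f_oneway(guided_scores, unguided_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Anything the students learned that the post-test doesn't ask about
# (the unexpected X*) is invisible to this computation.
```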
In a way, this brings us back to the idea of cognitive architecture, and the AI link. A lot of psychological theories in the '80s and '90s were inspired by computers, but in a bad way. Computer science had some neat, deep ideas such as statistical learning theory, computational complexity and connectionism. Unfortunately, these required significant investment to grasp. Instead, many people preferred to pick up some cozy metaphors and disguise them as theory. The mind was described in terms of CPU, memory and I/O units. In fact, this is as good a scientific theory as describing it in terms of a pot, a carrot and some fresh coriander. Sure, it's an interesting metaphor, but it doesn't really explain anything, nor is it grounded in any substance.
The problem with controlled experiments is that they imitate natural science where it doesn't make any sense. It would, if the mind were a simple deterministic machine that needed to perform simple tasks. Then you could tweak parameters independently and measure the variation in performance. If this is what learning is about, I think I'll look for a more interesting line of business.

The text concludes with examples of a couple of effective techniques (or, what I would call patterns). The first is the 'worked example', the second a 'process worksheet'. Both seem very good patterns for providing procedural knowledge. Such tools should be pre-packaged and provided to the learners when needed. The teacher's role is to identify the right moment to provide them, i.e. when the learner acknowledges that she needs a particular bit of procedural knowledge. This reminds me of The Matrix. When Trinity needs to fly a helicopter, the operator downloads a program into her mind. This isn't problem solving - Trinity needed to acknowledge that the helicopter was the tool to use, and to figure out what to use it for. The techne of managing helicopter controls should not be confused with knowledge of how to save the world.
By the way, in WebLabs we developed what we called active worksheets and pictutorials - very similar techniques - and we thought we were constructionist!



1: Paul A. Kirschner, John Sweller and Richard E. Clark. Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, (41)2:75-86, 2006.

2: James K. Kroger. Long-term memories, features, and novelty. Behavioral and Brain Sciences, 2003.


posted by Yishay Mor on Wednesday 9th, August 2006 (04:44) - comments (2) - permanent link