Consider what might be termed the quiet desperation syndrome, a disease that attacks the nervous systems of doctoral students. Students who are afflicted begin with a method, "I want to do a qualitative study," "I want to do a MANOVA," and then cast about for a question.
One of the hot issues that came up during the Vision document discussions (also here) was the idea of Open Research.
It's FP7 season, and time to put our money where our mouth is (sorry for the Americanism).
I'm sure many people in the network are working on proposals. Why not have an open process for this?
Now, you may think this is crazy; after all, we're in competition. But I say: think again. I remember back in the days of the Web 1.0 gold rush, I had an idea and wanted to talk to some venture capitalist about it. I asked him to sign a non-disclosure agreement. He said, "If the only thing you have going for you is that no one knows what you're thinking about, then don't bother. Either someone else is thinking the same, or they'll copy and better you the first time you expose it."
Here's a theory: the product of an open process can never be of a lesser quality than the product of a similar closed process. So if we open up, share our ideas, we can:
- learn from each other.
- form new teams.
- focus on our relative advantages.
As for myself, I'm party to two efforts. Since I'm not leading either, I can't say too much without my partners' consent. But I can say that one follows up on weblabs and playground, and the other follows up on the learning patterns project.
Clive Thompson is researching a Wired article on Radical Transparency, and what better way to do it than to post a note on his blog asking for input, or, as he puts it, tapping the hivemind:
Normally, I don't post about magazine assignments I'm working on -- because the editors want to keep it secret. But now I'm researching a piece for Wired magazine, and the editors have actually asked me to talk about it openly. That's because the subject of the piece is "Radical Transparency". And, in fact, I'd like your input in writing it.
The piece was originally triggered by a few postings on the blog of Wired editor-in-chief Chris Anderson, and the thesis is simple: In today's ultranetworked online world, you can accomplish more by being insanely open about everything you're doing. Indeed, in many cases it's a superbad idea to operate in highly secret mode -- because secrets get leaked anyway, looking like a paranoid freak is a bad thing, and most importantly, you're missing out on opportunities to harness the deeply creative power of an open world.
Interestingly, much of the discussion refers to scientific process, which seems to tie in nicely with Kaleidoscope's notion of an open research community.
The thing about openness is to know where it works, where it doesn't, and how to tell the two apart. It's absolutely great to be radically transparent with your spouse, but not always a great idea with your mother-in-law. But then, Clive means radical: open to all. Again, sometimes, for some things, it's great. As the LA Times realized, it doesn't work so well for writing editorials. Then again, maybe it could - if you carefully designed the right technology and the right social practices to use it.
Educational content is still too expensive and inaccessible for many developing countries, whether it is digital or traditional. As connectivity rates increase dramatically, it makes sense to prepare digital materials for these newly connected educational institutes, teachers and young people. There are a number of interesting projects worldwide to do this, including the Global Text Book project, aiming to "create a free library of 1,000 electronic textbooks for students in the developing world". These textbooks will cover areas typically included in the first two years of undergraduate study - I'm sure many developed-world students will use them too.
Google has launched its Literacy Project. Is this a noble contribution to human welfare, or a corporate attempt to dominate learning?
My five-minute test suggests that it is neither, and not much in general. It is simply a new front end on existing services. In fact, it's quite pathetic - something like an exercise in high-school HTML 101: create a web page that looks like an education portal using standard education services.
Try it out: put some 'uneducational' keywords in the search boxes and see what you get - the same ol' internet, served by Google.
Chris says this paper has been causing a stir in the EdTech blogosphere:
... with its provocative title and some of the criticism of the paper has actually avoided its main points. Aside from that, it links to some of the conversations that we were having and I have had with other researchers interested in learning based games. We have most often avoided the pure ‘sandbox’ approach because of the very real practical barriers to use within a classroom lesson situation. However, I am interested in the implications of this research and the work of others like Van Joolingen, on the actual learning effectiveness of this approach. What does this mean for us in the design of learning based games?
The online version Chris sent me is a not-for-quote draft, so please see my comments here as my personal impressions from that text, not as a review of the published paper (this is, after all, a blog). The question at hand is the debate between constructivist and instructionist approaches to education.
On one side of this argument are those advocating the hypothesis that people learn best in an unguided or minimally guided environment, generally defined as one in which learners, rather than being presented with essential information, must discover or construct essential information for themselves (e.g. Bruner, 1961; Papert, 1980; Steffe & Gale, 1995). On the other side are those suggesting that novice learners should be provided with direct instructional guidance on the concepts and procedures required by a particular discipline and should not be left to discover those procedures by themselves (e.g. Cronbach & Snow, 1977; Klahr & Nigam, 2004; Mayer, 2004; Shulman & Keisler, 1966; Sweller, 2003). Direct instructional guidance is defined as providing information that fully explains the concepts and procedures that students are required to learn as well as learning strategy support that is compatible with human cognitive architecture. Learning, in turn, is defined as a change in long-term memory.
The main argument is that:
After a half century of advocacy associated with instruction using minimal guidance, it appears that there is no body of research supporting the technique. In so far as there is any evidence from controlled studies, it almost uniformly supports direct, strong instructional guidance rather than constructivist-based minimal guidance during the instruction of novice to intermediate learners. Even for students with considerable prior knowledge, strong guidance while learning is most often found to be equally effective as unguided approaches. Not only is unguided instruction normally less effective, there is evidence that it may have negative results when students acquire misconceptions or incomplete and/or disorganized knowledge.
I can see where this comes from. Constructivist / problem-based theory is often used as an excuse for what Dennis Hayes and others call the FoFo educational methodology (F... off and find out). I must say this matches my own experience. I can't forget the time in a basic research methods course when we were told: "let's think about the meaning of 'primary sources'. umm. yes. why don't you turn to the person next to you and discuss this question". My problem is that this isn't constructivism: you don't 'social construct' dictionary definitions. This is simply bad teaching. It has nothing to do with theory. If you ever saw Papert work with kids you would know that constructionism is anything but minimal guidance. The key issue is embeddedness / situatedness, not quality of guidance. Lazy teaching is bad, no matter what your philosophy is.
Putting aside the interpretation and implementation of theory, one can question its fundamental grounding. Here, the basis for critique is the psychological framework of cognitive architecture. I must begin with a disclaimer: the last time I did any serious reading of cognitive psychology was over 10 years ago. My encounter with cognitive architecture comes primarily from an AI perspective. Yet the propositions here seem frightfully familiar. As presented, this appears to be a functionalist, box-diagram model of the human mind. For example, it stresses the notion of short-term and long-term memory, and sees learning as the transfer of knowledge from one to the other.
Furthermore, that working memory load does not contribute to the accumulation of knowledge in long-term memory because while working memory is being used to search for problem solutions, it is not available and cannot be used to learn. Indeed, it is possible to search for extended periods of time with quite minimal alterations to long-term memory (e.g., see Sweller, Mawer, & Howe, 1982).
I call this a box-diagram model, because I believe that such theories are driven by our quest to have a model which can be nicely drawn on paper. You draw a box for "short-term memory", another for "long-term memory", an arrow between them labeled "learning", and Bob's your uncle. The only problem, of course, is that the human mind doesn't work that way. Take the experiments from the visual cognition lab, for example. When you observe carefully, you find that perception, information, and knowledge are all one big spaghetti pile. Well, not quite, but several spaghetti piles with a lot of noodles weaving their way from one to the other. No pretty block diagrams. And this is just one example: neurological studies of learning (e.g. Zwaan & Taylor, Pecher & Zwaan) show similar results. The flow from experience to abstract knowledge is gradual, continuous, non-linear, and most importantly: situated, embedded.
James Kroger leaves no room for doubt:
Ruchkin et al. present compelling evidence that information in working memory, rather than existing in a special purpose buffer distinct from the neural substrates specialized for perceiving that kind of information, is a state of activation in those same substrates under the control of frontal cortex. As the authors note, this is a more parsimonious scheme than duplicate representation architectures for the perception and storage of the myriad kinds of information we deal with. The view that attention activates representations, even in low-level visual areas, has also been demonstrated for nonverbal information by Kastner et al. (1999) and others, and the control of posterior representations by frontal cortex was embodied in our computational model of working memory (Kroger et al. 1998).
This leads to situated models of learning: action situated, such as Lakoff & Núñez's embodied mathematics, and socially situated, such as Rogoff's apprenticeship model. Both fit well with constructivism, constructionism and problem based learning. Both have nothing to do with unguidedness.
Many studies stress the critical role of the teacher in constructivist education. Proponents of this approach would explain that for exploration to be effective, it must be guided. Otherwise, it is a random walk. In fact, while instructionist learning is predictable, being pre-arranged according to schedule, it needs much less guidance than learner-directed learning, where the teachers' knowledge, experience and judgment guard the learner from wandering aimlessly. Yet, if one's starting point equates constructivism with negligence, it is possible to reach conclusions such as:
Because students learn so little from a constructivist approach, most teachers who attempt to implement classroom-based constructivist instruction end up providing students with considerable guidance.
Or make discoveries such as:
Hardiman, Pollatsek, and Weil (1986) and Brown and Campione (1994) noted that when students learn science in classrooms with pure-discovery methods and minimal feedback, they often become lost, frustrated, and their confusion can lead to misconceptions.
Of course students need guidance. The conclusion from this 'evidence' is simple: for discovery learning to be effective, it requires careful attention from a good teacher. If there's a critique here, it should be that teachers are not properly trained to apply the theory. Again, no theory should be an excuse for poor teaching. I can easily come up with an argument against instructionism:
If a teacher instructs non-sense, then learners are likely to learn nonsense.
In fact, I had a teacher who taught us, very methodically and didactically, that the river Seine flows 50 meters uphill, and that every liquid contains water.
The authors seal their argument with a call for controlled experiments. They argue that
Controlled experiments almost uniformly indicate that when dealing with novel information, learners should be explicitly shown what to do and how to do it.
Now there's a problem with controlled experiments in education. First, you need something to control. You need a situation where all but one variable is fixed. That is easy to achieve in a tightly engineered situation, but not so much in an open-ended one. Then, you need something to measure. Some of my favorite examples of teaching and learning are cases of: 'we planned to teach X, but then something unexpected happened, and we ended up learning X*'. In a controlled experiment, that doesn't cut the mustard. You simply can't say 'the ANOVA sucks, but they learned something else.'
In a way, this brings us back to the idea of cognitive architecture, and the AI link. A lot of psychological theories in the '80s and '90s were inspired by computers, but in a bad way. Computer science had some neat, deep ideas such as statistical learning theory, computational complexity and connectionism. Unfortunately, these required significant investment to grasp. Instead, many people preferred to pick up some cozy metaphors and disguise them as theory. The mind was described in terms of CPU, memory and I/O units. In fact, this is as good a scientific theory as describing it in terms of a pot, a carrot and some fresh coriander. Sure, it's an interesting metaphor, but it doesn't really explain anything, nor is it grounded in any substance.
The problem with controlled experiments is that they imitate natural science where it doesn't make sense. It would make sense if the mind were a simple deterministic machine that needed to perform simple tasks. Then you could tweak parameters independently and measure the variation in performance. If that is what learning is about, I think I'll look for a more interesting line of business.
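To make the methodological point concrete, here's a toy sketch of the kind of analysis these controlled studies rest on: hold everything fixed except the teaching condition, measure a single outcome, and compute a one-way ANOVA F statistic. The test scores below are invented numbers, purely for illustration; the point is that whatever the students learned that the chosen measure doesn't capture simply never enters the analysis.

```python
from statistics import fmean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group variance
    relative to within-group variance."""
    k = len(groups)                                  # number of conditions
    n_total = sum(len(g) for g in groups)
    grand_mean = fmean(x for g in groups for x in g)
    # Between-group sum of squares: variation explained by the condition
    ss_between = sum(len(g) * (fmean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: everything the design can't explain
    ss_within = sum((x - fmean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Invented scores on one fixed test, two teaching conditions.
guided = [78, 82, 85, 80, 84]
unguided = [70, 75, 72, 68, 74]
print(round(one_way_anova_f(guided, unguided), 2))  # → 30.49
```

A large F says the conditions differ on this measure, and nothing more: if the 'unguided' group learned something the test never asks about, the experiment is structurally blind to it.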
The text concludes with examples of a couple of effective techniques (or, what I would call patterns). The first is the 'worked example', the second a 'process worksheet'. Both seem like very good patterns for providing procedural knowledge. Such tools should be pre-packaged and provided to the learners when needed. The teacher's role is to identify the right moment to provide them, i.e. when the learner acknowledges that she needs a particular bit of procedural knowledge. This reminds me of The Matrix. When Trinity needs to fly a helicopter, the operator downloads a program into her mind. This isn't problem solving - Trinity needed to acknowledge that the helicopter was the tool to use, and needed to figure out what to use it for. The techne of managing helicopter controls should not be confused with knowledge of how to save the world.
By the way, in weblabs we developed what we called active worksheets and pictutorials - which are very similar techniques - and we thought we were constructionists!