What’s next for educational software and technology?

This article was originally published by George Siemens on Tuesday 13 August 2013 on elearnspace, a blog to which George regularly contributes: http://www.elearnspace.org/blog/

Bastow has received permission to reprint this article. It contains links that may not be relevant to teachers and principals in Australia; however, the principles in the article remain fundamental.
_________________________________________________________________


Most educational software instantiates physical learning spaces. This is reflected in learning management systems, virtual classrooms, and interactive whiteboards. Essentially, we use new tools to do the work of old tools and largely fail, at first, to identify and advance the unique affordances of new technology.

The internet fragments information and antagonises pre-established information structures. Albums, books, and courses, for example, have a hard time existing as coherent wholes in a network. When individuals have access to tools for creating, improving, evaluating, and sharing content, centralised structures fail. This has been a core argument that Stephen and I have been making since our first open online course, CCK08.

Early on in CCK08, we discovered that central discussion forums and learning content were augmented, even replaced, by distributed interactions. Instead of creating central spaces of learning, our focus in subsequent courses (reflected in Stephen's gRSShopper software) turned to encouraging students to own their own learning spaces. The course, as a result, became more about aggregating distributed interactions than about forcing learners into our spaces. A Domain of One's Own is another great example of promoting learners' self-management of identity and ownership of space.
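
To make the aggregation idea concrete, here is a minimal sketch of a course-as-aggregator: posts from learner-owned blogs are pulled into a single course stream. This is an illustration in the spirit of gRSShopper, not its actual implementation; the feed URLs and field choices are hypothetical, and it assumes the third-party feedparser library.

    # A course-as-aggregator sketch in the spirit of gRSShopper: merge posts
    # from learner-owned blogs into one newest-first course stream.
    # Feed URLs are hypothetical; requires feedparser (pip install feedparser).
    import time
    import feedparser

    # Each learner registers the RSS/Atom feed of their own space.
    LEARNER_FEEDS = [
        "https://alice.example.com/blog/feed",
        "https://bob.example.com/posts/rss",
    ]

    def aggregate(feed_urls):
        """Collect entries from all learner feeds into one merged stream."""
        entries = []
        for url in feed_urls:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                entries.append({
                    "author": feed.feed.get("title", url),
                    "title": entry.get("title", "(untitled)"),
                    "link": entry.get("link", ""),
                    "published": entry.get("published_parsed") or time.gmtime(0),
                })
        # The "course" is just the distributed posts, merged and sorted.
        return sorted(entries, key=lambda e: e["published"], reverse=True)

    for post in aggregate(LEARNER_FEEDS):
        print(time.strftime("%Y-%m-%d", post["published"]),
              post["author"], "-", post["title"])

The point of the design is that the system holds no content of its own: remove the aggregator and the learners' posts still exist, in spaces they control.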

The challenge with fragmentation is that learning itself is a coherence-forming process. Even when we get information from a variety of sources, we still go through a process of putting these concepts in relation to one another. This process is not unlike how we recognise a person's face: many regions of the brain are involved, and the "binding" of these distributed processes is what generates recognition. If connections don't form, learning doesn't happen and knowledge isn't generated.

Educational software in common use today assumes that structure exists a priori. This structure might be in the form of a textbook, course content, or a series of lectures. Learners are then expected to duplicate the knowledge of the instructor (hence the notion of knowledge transfer). This mindset is an artefact of physical spaces of learning. When teaching happened only in classrooms, students had to be brought together into a set physical space. It wasn't practical, or cost effective, to cut textbooks up into individual images and small text elements and encourage learners to remix them with other texts and resources.

Physical space and physical structure of information determined suitable pedagogies.

The limitations of physical space have diminished. Information is generally in digital form now, even in traditional classrooms. Contrived structures of coherence are no longer needed in advance of learner engagement with content. Instead, something along the lines of Wolfram's notion of computational knowledge, or of schema on read (where structure is applied when information is accessed, rather than fixed when it is stored), seems more sensible today.
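
As a toy illustration of schema on read (my example, not one from the article): fragments are stored loosely, with no schema enforced when they are written, and a structure is imposed only at the moment of access. The record formats and field names here are hypothetical.

    # Schema-on-read, in miniature: heterogeneous records are stored as-is,
    # and a structure is imposed only when the content is accessed.
    # Field names are hypothetical.
    import json

    # Raw store: fragments saved with no schema enforced at write time.
    RAW_RECORDS = [
        '{"title": "Binding", "body": "How distributed processes cohere."}',
        '{"heading": "Networks", "text": "Knowledge as connected fragments."}',
    ]

    def read_as_note(record):
        """Apply a 'note' schema at read time, mapping whatever fields exist."""
        data = json.loads(record)
        return {
            "title": data.get("title") or data.get("heading") or "(untitled)",
            "body": data.get("body") or data.get("text") or "",
        }

    for raw in RAW_RECORDS:
        note = read_as_note(raw)
        print(note["title"], "->", note["body"])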

I'll take it a few steps further: in the near future, all learning will be boundary-less. All learning content will be computational, not contrived or pre-structured. All learning will be granular, with coherence formed by individual learners. Contrived systems, such as teaching, curriculum, content, and accreditation, will be replaced, or at minimum augmented, by models based on complexity and emergence (with a bit of chaos thrown in for good measure). Perhaps it will be something like, and excuse the cheesy name, learnometer. Technical systems will become another node in our overall cognitive systems. Call it embodied cognition. Or distributed cognition. Or appeal to Latour's emphasis that technical nodes in a knowledge system can be non-human and can be seen as equal to human nodes. I've used the term connectivism to describe this. Others have emphasised networked knowledge and combinatorial creativity.

The terminology doesn’t really matter.

The big idea is that learning and knowledge are networked, not sequential and hierarchical. Systems that foster learning, especially in periods of complexity and continual change to the human knowledge base, must be aligned with this networked model. In the short term, hierarchical and structured models may still succeed. In the long term, and I'm thinking in terms of a decade or so, learning systems must be modelled on the attributes of networked information, reflect end-user control, take advantage of connective/collective social activity, treat technical systems as co-sense-making agents alongside human cognition, make use of data in automated and guided decision-making, and serve the creative and innovation needs of a society (actually, a human race) facing big problems.
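
To make the contrast concrete, here is a small, hypothetical sketch (mine, not the article's) of a curriculum as one prescribed sequence versus knowledge as a network that each learner traverses in their own order, forming coherence along the way.

    # Hypothetical contrast: a curriculum as one prescribed sequence versus
    # knowledge as a network each learner traverses in their own order.
    SEQUENTIAL = ["intro", "theory", "methods", "application"]

    NETWORKED = {  # concepts linked many-to-many; coherence comes from the walk
        "intro":       ["theory", "application"],
        "theory":      ["methods", "intro"],
        "methods":     ["application", "theory"],
        "application": ["intro", "methods"],
    }

    def walk(graph, start, steps):
        """One learner's path through the network of concepts."""
        path, node = [start], start
        for _ in range(steps):
            node = graph[node][0]  # a real system might choose by interest or analytics
            path.append(node)
        return path

    print("prescribed:", " -> ".join(SEQUENTIAL))
    print("one of many:", " -> ".join(walk(NETWORKED, "application", 3)))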

George Siemens is an educator and researcher on learning, technology, networks, analytics, and openness in education. He is the author of Knowing Knowledge, an exploration of how the context and characteristics of knowledge have changed and what it means to organisations today.

He has delivered keynote addresses in more than 30 countries on the influence of technology and media on organisations, education, and society. His work has been profiled on national and international radio and television and in newspapers (including the New York Times).

_________________________________________________________________

George will be presenting at Bastow on the importance of understanding how students learn online, and on how teachers and educational leaders can use data to inform decisions and maximise the learning experience for their students.