What Do I Know? Yet Another eLearning Blog
Friday, March 10, 2017
Skills and Knowledge for Online/Blended Teaching
Here's a follow-up to a post I wrote five years ago on a taxonomy for an eLearning professional. I have put together a rubric on a Google Doc, Skills and Knowledge for Online/Blended Teaching, that matches that post. Like the previous post, it is meant as a way to think about paths for professional development regardless of the level at which one is working in eLearning.
Monday, January 30, 2017
Questioning the Dual Channel Theory
I was investigating the dual channel
theory used in Mayer and Moreno (1999). After a review of the literature, it appears
that the empirical support for the theory is currently unreliable.
Long-term and Working Memory
Cognitive
psychology identifies two types of memory: a very limited working memory and a
theoretically unlimited long-term memory. Memories stored in long-term memory
are stored in “hierarchically organized schemas which permit us to treat
multiple sub-elements of information as a single element” (Kalyuga et al.,
1999, p. 351). Working memory, on the other hand, is very limited and can be
overloaded easily unless attached to an existing schema. Thus, Kalyuga et al.
(1999) posit a split-attention effect where some working memory resources must
be devoted to processing the connection between two separate pieces of
information (a diagram and text explanation, for instance). Kalyuga et al.
(1999) indicate the audio presentation of information improves learning by
overcoming the split-attention effect since the diagram and audio explanation
can be presented simultaneously. Mayer and Moreno (1999) confirm the main
premise of Kalyuga et al. through their modality effect. They propose a
“dual-processing model of working memory with separate channels for visual and
auditory processing” (Mayer & Moreno, 1999, p. 359). Both Kalyuga et al.
(1999) and Mayer and Moreno (1999) relate their research to previous cognitive
psychological theorists who observed some form of splitting in the working
memory.
Cognitive Psychology Support for Separate Audio/Visual Channels
Several
cognitive psychologists have presented evidence of divisions in working memory.
Paivio (1991) has researched the “dual coding theory (DCT) of memory and
cognition” (p. 255) for decades. Paivio (1991) posits a split in working memory
between verbal, including printed or spoken words, and non-verbal elements,
including visual objects or environmental sounds and other sensorimotor
impressions. Likewise, Penney (1989) observed a difference in short-term
retention of information either seen or heard. Her separate-streams hypothesis,
however, identifies a difference between the modalities that lasts only
seconds in a sensory store. As Reinwein (2012) identifies, another theorist,
Alan Baddeley, posits a split in working memory between a visuo-spatial
sketchpad that processes visual and spatial information and a phonological loop that
processes verbal information. Printed information that is read is also
processed through this phonological loop indirectly through an internal
articulation. There is evidence of divisions in working memory. However, as
Reinwein (2012) argues, while there may be analogous connections between the
splits identified in the split-attention effect and the modality effect and the
divisions found by Paivio (1991), Penney
(1989), and Baddeley, none of the cognitive psychological models presented directly
supports separate channels for visual and auditory processing.
Questioning the Research Supporting Separate Audio/Visual Channels
Apart
from the lack of support from cognitive psychology for the theories of Kalyuga
et al. (1999) and Mayer and Moreno (1999), questions have been raised about the
reliability and validity of their research. Reinwein (2012) questions several
aspects of the research of Kalyuga et al. (1999) and Mayer and Moreno (1999). First,
he identifies the lack of direct support from the aforementioned cognitive
psychologists whom Kalyuga et al. (1999) and Mayer and Moreno (1999) cite. In
fact, Reinwein (2012) indicates that both Kalyuga et al. (1999) and Mayer and
Moreno (1999) have “experimentally crossed the variables Modality (visual,
auditory) with Memory (short term, long term)” (p. 27). Second, Reinwein (2012)
performs a meta-analysis on the same published research that another
researcher had previously studied in a meta-analysis. Having corrected for
certain methodological problems, Reinwein (2012) found half the effect size
previously identified by the other meta-analysis. Third, Reinwein (2012)
further halves the effect size when correcting for publication bias, where
studies showing high effect sizes are more likely to be published than studies
showing low effect sizes. Lindow et al. (2011) hypothesize that the same
publication bias may explain why the research underlying the modality
effect appears so robust at first glance. They note that almost half of the published studies
supporting the modality effect appear in a single journal, the Journal of Educational Psychology.
Lindow et al. (2011) attempt to replicate the findings of Mayer and Moreno
(1999) by repeating their experiment and fail to find a modality effect. They
hypothesize that, rather than a modality effect, an “auditory recency
hypothesis” (Lindow et al., 2011, p. 232) may explain the effect seen in the
findings of Mayer and Moreno (1999). The auditory recency hypothesis posits
that the final sentence heard is retained significantly longer than when read.
Thus, the length of text used in Mayer and Moreno (1999) may have contributed
to the appearance of a modality effect. According to Lindow et al. (2011), the
questions raised about Mayer and Moreno (1999), along with the failure to
replicate their findings, cast doubt on the validity of their work, and further
studies are needed to confirm or disconfirm it.
Conclusion
Serious
questions have been raised about the reliability of the research that indicates
a split between an audio and a visual channel in working memory. If these separate channels do not exist, the Modality Principle is called into question, and if it turns out the Modality Principle does not exist, then the Reverse Modality Effect would need to be renamed.
References
Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13, 351-371.
Lindow, S., Fuchs, H. M., Fürstenberg, A., Kleber, J., Schweppe, J., & Rummer, R. (2011). On the robustness of the modality effect: Attempting to replicate a basic finding. Zeitschrift für Pädagogische Psychologie, 25(4), 231-243.
Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91(2), 358-368.
Paivio, A. (1991). Dual coding theory: Retrospect and current status. Canadian Journal of Psychology, 45(3), 255-287.
Penney, C. G. (1989). Modality effects and the structure of short-term verbal memory. Memory & Cognition, 17(4), 398-422.
Reinwein, J. (2012). Does the modality effect exist? And if so, which modality effect? Journal of Psycholinguistic Research, 41(1), 1-32.
Tuesday, July 5, 2016
Leadership and Technology 2
As mentioned in my last post, the ed tech field is
relatively new and has few well-recognized success criteria. This lack leaves
the field particularly open to the Dunning-Kruger
effect, where relatively ignorant newcomers can have delusions of grandeur
about their own skill level. A relatively trivial or shallow understanding of
ed tech can lead some to believe that they are more skilled than they actually
are. This attitude can negatively impact student learning and faculty
prioritization of professional development. Faculty suffering from the
Dunning-Kruger effect can also easily spread misinformation and poor practices
to other faculty. Additionally, they may have only a rudimentary understanding
of a leader’s communicated vision but believe they have a full understanding of
the vision and are implementing it successfully.
Thomas and Patricia Reeves, in their article, “Educational Technology Research in a
VUCA World,” produced the following table distinguishing ed tech
research that focuses on the technology from research that focuses on the pedagogy:
[Table: the left column is headed “Research focused on things is what we do” and the right column “Research focused on problems is what we should do”; the lists under each heading are not reproduced here.]
The list on the left, on things that research tends to focus
on, is driven more from an IT perspective, while the list on the right, on
things that the Reeveses feel research should focus on, is driven more from a
teaching perspective. This is an interesting and potentially fruitful
discussion for ed tech leaders to be having with faculty. However, shifting
the focus too much toward pedagogical concerns risks simplistic
1:1 “solutions,” such as “I use Pinterest to engage students,”
especially among those suffering from the Dunning-Kruger effect. Rather
than seeing one research area as less important than the other,
these two areas should be seen as two sides of the same coin. Thus, specific
technologies should not be separated from the problems they are attempting to
solve. This view has the further advantage of potentially comparing a variety
of technologies and their effects on a single pedagogical problem. This
strategy combats the simplistic 1:1 relationship of technology and “solution.”
Ed tech leadership has an important role to play in promoting
this linking of technology with pedagogical problem-solving. By looking at
technology interventions as pedagogical problem-solving, leaders can decide on
the best approaches to promote based on the available research. York University
struck an educational technology advisory group (an example of distributed
leadership) which researched the university’s potential direction and settled
on a strategy of primarily offering blended learning with some increases in fully
online courses. Other schools have implemented a “flipped classroom”
strategy, providing the technological architecture and robust supports that give
faculty a low barrier to entry when implementing the strategy. Promoting a
single vision for technology adoption allows the organization to specialize in
a particular strategy. That strategy can then be properly resourced with
adequate support staff. As well, the pedagogical problems that are addressed by
that technology strategy can be properly understood by faculty through
professional development.
Leadership and Technology 1
I am quite new to the concept of leadership and the various
styles of leadership, so my blog posts will be quite focused on the basics and
me thinking through leadership and technology. I have been a leader in the past
and still consider myself a leader to a certain extent, even though my position
is not a formal leadership position. Currently, I try to lead the faculty that
I work with as well as leading by example; for instance, I try to demonstrate
the proper use of educational technology when presenting to large groups.
There are four primary leadership styles that are often
combined in a number of ways as well as a fifth leadership style that also
appeals to me:
1. Autocratic: Autocratic leaders retain all power for themselves. This speeds decision-making but can lead to an organizational culture that is primarily concerned with issues of power and status.
2. Managerial: The managerial leader is primarily concerned with running the organization smoothly and may not promote a clear and inspiring vision for the organization.
3. Democratic: A democratic leader takes into account the opinions of his or her followers but feels that the final decision-making authority resides with the leader. Although consulted, the followers can lack buy-in to the leader’s decision.
4. Collaborative: A collaborative leader not only consults with his or her followers but also makes decisions through discussion and democratic decision-making, which hopefully arrives at a consensus. While this style of leadership increases the likelihood of buy-in, it can be inefficient.
5. Servant: A “leader among equals.” The servant leader seeks to serve his or her “constituents” and views them as peers and not followers.
I believe I naturally gravitate toward being a servant
leader. Perhaps it’s growing up in a hockey culture where one is expected to
defer to team success and not take individual credit for successes that makes
me attracted to that concept. There may be a team captain and some on the team
may get paid more, but everyone has a role to play that is equally important to
team success. If a fellow team member is a competent professional, there should
be no reason to not see them as equals.
In addition to the above styles, James MacGregor Burns contrasted
two different styles: transactional and transformational. Transactional leaders
see leadership as a series of transactions and may be most closely related to
the managerial leader above. Examples of transactions are rewards, punishments,
reciprocity, and monetary exchanges. A transformational leader, on the other hand,
creates a vision and encourages followers to pursue that vision by aligning
that vision to the motivations of the followers. The transformational leader
empowers the followers to pursue fulfillment of the vision. The transactional
and the transformational cannot be completely separated. Without any vision,
the transactional leader is a tedious bureaucrat. Without any management, the
transformational leader is an ungrounded dreamer. I see myself more as a
transactional leader. Any larger “vision” that I may have is too abstract to
communicate clearly and not particularly interesting to me anyway because I am
not a fan of abstraction; the “vision” that I communicate is through example:
learn more and be able to do more, so that you can perform your job function
better.
I see leadership overall as a Venn diagram (the diagram itself is not reproduced here; it shows three overlapping circles: vision, resource management, and charisma):
While we associate the transformational leader with having a
vision and the transactional leader with managing resources, I think it is
important to separate out the concept of “charisma” from those two other
functions. Charisma here means the ability to influence others to follow one’s
direction. Since the leader and manager cannot be discretely separated, I think
there is an overlap between the leader and manager. The ideal transformational
leader, I think, is able to combine all three traits. The ideal leader can
present a vision, manage resources, and persuade others to follow the leader’s
direction. I think it is important to separate out charisma because with any
one of the three traits, a person can perform or appear to perform a useful
role in an organization. I have known managers who are able to get by,
especially in a field as new as ed tech, while having no vision and being poor
resource managers: a charismatic person can succeed by the sheer force of
their personality, persuading their bosses that they are doing well both as
leaders and managers. This person in the diagram has been labelled the “charlatan,”
a person who is both an ineffective manager and leader and only appears to be
doing a successful job based on their ability to “talk a good game.” In a field
such as ed tech, which has few clear, well-known success metrics, a charlatan
can easily dazzle his or her superiors by making the most banal achievement
seem extraordinary and the time taken by employees to reach the banal achievement
to be an efficient use of resources. Charisma is also important to managers or
leaders who lack one of the other traits an ideal leader has: the manager who
lacks vision and the visionary who lacks management. By being able to influence others
charismatically, a leader can get others to embrace his or her vision.
Conversely, one may have an excellent vision for an organization but be unable
to persuade anyone else to adopt that vision. A charismatic manager, on the
other hand, can positively impact the morale of his or her employees while
managing them as resources, persuading them that the constraints he or she is
placing on them are more agreeable than they would seem coming
from a less persuasive manager.
Monday, August 3, 2015
A bad reading
When I was teaching writing, I would often have to tell
students to ensure they were in control of the language that they were using.
Don’t use a fancy word just for the sake of using a fancy word if you are
unsure you are using it in the proper context (advice probably
influenced by Orwell). In that case, it is better to use the simple word in
the correct context for the sake of clarity and exactness. A fancy word should
only be used when it is more exact than the simple word. That is what came to
mind as I tried to read Gifreu-Castells and Moreno’s “Educational multimedia
applied to the interactive nonfiction area. Using interactive documentary as a
model for learning.” The language was so out of control that I found myself
using an excessive amount of cognitive load to decipher the text. Too often,
the article veers between the painfully obvious and undecipherable nonsense.
Fortunately, the article is from a conference proceeding and not actually a
published work.
On every page, there is some incongruous turn. Why, for
instance, bring up Piaget in the second paragraph of the Introduction, except
to attempt to give the appearance of being educated? Do you mean that children
should construct “knowledge by physically interacting with media and
objects”? Or adults? Did Piaget study
adult learning? Or is the interactive documentary field aimed at children? Bringing
up Piaget in such a way is confusing. The same could be said when the authors
bring up the subjects of blended learning and MOOCs in the second paragraph of
Section 4.3. It is as if they are interested simply in throwing in as many
educational buzzwords as they can. The painfully obvious tautology of “InterDOC
mainly provides online content, so it can serve as online learning material” is
used to describe how the interactive documentary can support blended learning.
The next sentence brings up MOOCs, relating their appearance to some fallacious
idea that “collaborative learning has become the new dominant trend.” The only
way that sentence makes any sense is if they are referring to cMOOCs, and not
the more teacher-directed xMOOCs.
Rather than making any such clear distinction and attempting to be informative,
they appear to be more interested in just throwing in another buzzword to
appear “current.”
The “hypothesis” they seek to test—“that interactive
documentary could be a suitable education tool because it offers new ways to
approach, understand, play and learn from reality”—vacillates between the
obvious (of course, it’s a suitable education tool) and the nonsensical (even
ignoring the poor grammar, is the genre of interactive documentaries really
providing a new way to learn? Neurobiologists might have a
different opinion). The source material, the interactive documentaries, is
actually interesting as subject matter. An actually testable hypothesis would
be to compare the learning, engagement, and emotional impact of the regular
documentary with those of the interactive version. Does one actually learn more from
interactivity or does the cognitive load of interacting with the material
interfere with how much one learns? Similarly, do viewers (or “interacters”)
actually spend more time with the interactive documentary? Finally, would they
report greater or lesser emotional attachment to the subject matter based on
the interactivity? All three aspects of the hypothesis are easily testable: the
first through a post-test; the second through recording visitors’ time on site;
and the third through self-reporting. Rather than just assuming that
interactivity is better, they actually could add to our knowledge base about
the subject.
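Out of curiosity, here is a minimal sketch of how those three comparisons could be run once the data were collected. None of this comes from the article: the file name, the column names, and the choice of a simple Welch's t-test per measure are my own assumptions for illustration.

```python
# Hypothetical sketch: compare a linear-documentary group with an
# interactive-documentary group on the three measures discussed above.
# The CSV file and its column names are invented for illustration.
import pandas as pd
from scipy import stats

# Expected columns: group ("linear" or "interactive"), post_test_score,
# minutes_on_site, attachment_rating (e.g., a 1-7 self-report scale)
data = pd.read_csv("documentary_study.csv")

linear = data[data["group"] == "linear"]
interactive = data[data["group"] == "interactive"]

# Welch's t-test for each measure (no equal-variance assumption)
for measure in ["post_test_score", "minutes_on_site", "attachment_rating"]:
    t, p = stats.ttest_ind(interactive[measure], linear[measure], equal_var=False)
    print(f"{measure}: t = {t:.2f}, p = {p:.3f}")
```

A real study would of course need an adequate sample and corrections for multiple comparisons, but even something this simple would tell us more than assuming from the start that interactivity is better.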