Sunday, November 15, 2015

Knowledge construction and assimilation: A cheat sheet for when learning is more likely to occur with adult learners

In order for learning to occur, knowledge must be constructed and then assimilated:

-With hands-on experience.

-With repetition: not to be mistaken for rote memorization.

-With familiarity: making use of familiar settings and experiences and integrating prior knowledge.

-Generalization of material is more likely to occur when it is taught with the expectation of application to other contexts.

-When more concrete and less abstract: research indicates only about one third of people progress to abstract-level thinking, stage 4 of cognitive development (the formal operational stage) according to Piaget.

-Adult learners need compelling reasons to continue pursuing personal, professional, and academic goals.

-Enable thoughtful participation in activities that generalize to the workforce and future possible selves/situations; enable transference of knowledge through activities, discussions, projects, milestones, and more:

  • Connect to prior knowledge.                      
  • Construct knowledge through activities, exploring, discussion, inventing and discovering.           
  • Application of course material to real-world contexts through activities, discussions, projects, field trips, internships, and mentoring.
  • Cooperative learning: sharing, responding, and communicating with others through activities, small group and large group discussions, and in-class projects.
  • Generalizing and transferring learning: use prior knowledge from familiar contexts, previous coursework, or bridge programs in the current course; build on what the student knows, including life experiences and storytelling (vicarious experience); and project current situations into future contexts to encourage transfer of material to appropriate future settings.

Saturday, August 22, 2015

Why Isn't My Article Published?

Tips for what may be wrong with your article (Phillips, 2015, p.1)

  • Wrong journal
  • Too long/short
  • Journalism
  • Extract from report/dissertation unadapted
  • No clear topic
  • Too little context
  • Too little theory
  • Clear gaps in literature
  • Polemical
  • Research not fully explained
  • Failure to relate findings/conclusion to aims/theory/literature
  • Language/style not checked
  • Text not proofread
  • Not 'situated' in comparative education
  • Plagiarism/legal issues

For more information see:
Phillips, D. (2015). Who gets published? Comparative Education, 51(3), 303-304.

Tips for the novice: Six tips for publishing in research journals.

Thursday, July 23, 2015

Free Community College in California: Will This Face-lift Positively Impact Disparate Outcomes?

California has surfaced as one of the states with significant disparities in educational outcomes. Changes are needed; to this end, high-profile state and private grants are available that are based on quality, longitudinal data. The need to address outcomes (including by providing free community college) is receiving attention at both federal and state levels.

However, a national survey of community college presidents found skepticism about the prospect of free community college coming to their states even if federally funded; moreover, these presidents suggest that too many choices are a significant factor hindering degree completion (Jaschik & Lederman, 2015).

Let's not forget there is also general skepticism among the public. As discussed in a Forbes article (Kelly, 2015), we know from the data on community colleges that "free" does boost enrollment numbers, but this probably won't translate to improved outcomes.

It is my hope in writing this post that more educators will engage in the difficult conversations surrounding access issues in higher education.

The following topics are discussed briefly:
  • Statistics on California College Outcomes
  • California Community Colleges: Practices to Examine
  • The Uncomfortable Necessity of Change - Recommendations
  • Concluding Comments
Statistics on California College Outcomes

The following data reflects California colleges:

Six-year college degree completion rates for California institutions, from the Campaign for College Opportunity (2015):
  • Community College 49%  
  • California State Universities 51%
  • University of California 84%  
California Community Colleges: Practices to Examine

Community colleges educate a significant number of Black/African American and Latino students (Campaign for College Opportunity, 2015). Emphasis on improving education at this level would benefit many students and open access to degree completion and transfer to four-year colleges and universities. Systemic impediments exist. Data reported by the community colleges reflect the success rates of students who complete courses and programs and those who transfer (California Community Colleges Chancellor's Office Data Mart). The information reported does not sufficiently reflect the many students who withdraw from courses. More reasons contribute to failure to complete programs than the number of program and course choices. Some unintentional and potentially systemic factors include the reasons students withdraw from courses and the impact on community colleges, misunderstandings surrounding "content expertise," and the identification of factors that need to change before new program models or financing are embraced. Without examination of these and other potential contributing factors, new programs and funds will be filtered into existing systemic practices and will continue disparities in outcomes.

What Does a "W" Mean Anyway?  The number of students who withdraw from courses does not seem to be included in the outcome measures listed on Data Mart. A "W" is not something a student wants on a transcript, and certainly not too many if the goal is to transfer, but a "W" is better than sub-par grades. Students, knowing this, navigate the community colleges as best they can, and if grades are low, they drop. Counselors commonly recommend dropping in this case, and this is the right thing to do. Options to repeat courses should continue, because some students are under-prepared, life presents problems that interrupt studies, and instructor style or experience and other factors may contribute.

Blaming failure on students instead of improving how courses are taught is easy. Many instructors lack understanding of the learning process, student motivation, and test construction (e.g., Bloom's Taxonomy; Krathwohl, 2002). Pedagogy is hardly discussed and even less understood. Academic freedom seems to impede discussions surrounding the practice of teaching (aka pedagogy), because mere suggestions about instruction seem to be met with responses that this is part of academic freedom; difficult discussions are necessary (Community College Research Center, 2014).

Financial Benefit of a "W."  A slight discouragement in the financial compensation colleges receive when students withdraw would be appropriate, so that compensation for students who complete a course is not equivalent to compensation for those who withdraw. When compensation is the same, the more students who drop, the better for the college, because those students need to attend another semester, and the better for the instructor, because the workload for the current semester is reduced and the students will be back. Legislators could pay a percentage instead of the full rate, with students charged accordingly: perhaps 40-60% for students who withdraw with a "W," and full compensation only for completion of the course. While the emphasis would be on full pay for students who complete the course, the marginal pay would acknowledge the colleges' and instructors' effort and time spent on students who withdraw. The percentage should cover costs but discourage withdrawal as a practice, and arriving at a percentage with this effect would require large-scale implementation and evaluation. Focusing on completion would drive improvement in the practice of teaching at the community college level and divert the temptation to blame students for failing outcomes.
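The pro-rated compensation idea above can be sketched in a few lines. The function name, rates, and counts below are made-up illustrations of the proposal, not actual funding figures:

```python
def course_funding(full_rate, completions, withdrawals, w_fraction=0.5):
    """Toy model of pro-rated compensation: the full per-student rate
    for each completion, and only a fraction of it (e.g., the proposed
    40-60%, here 50%) for each "W". All figures are hypothetical."""
    return full_rate * (completions + w_fraction * withdrawals)

# e.g., $1,000 per seat, 25 completions, 5 withdrawals at 50%:
# 1000 * (25 + 0.5 * 5) = $27,500 instead of a flat $30,000
```

Under a flat-rate system the same 30 enrollments would pay the same regardless of how many students finish; the pro-rated version makes completion the financially favored outcome.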

There is a buzz about "free community college." Free college that increases enrollment does not address the course/program completion or instructional issues central to the educational experience. In California, at this time, the grant driving planning for this possibility offers "free" credit and non-credit courses that are remedial in nature or lead, at best, to very low-paying entry-level jobs (AB86), not the intended "liveable wages" specified in the grant language. While California may be the next to provide this opportunity (Fain, 2015), it is doubtful that in this form Obama's hopes will be realized.

The Problem of Content Experts and Applied Experience. Content experts are just that: experts in content. Because professional development funds were cut by the state more than 10 years ago, a common attitude among community college educators is, "We're content experts - you don't get the best teaching here." The way instruction is delivered can lack organization, burying students under loads of content with the expectation that they will organize and make sense of it. Community education is a mission and a commitment, and there is a need to engage faculty in conversations to this end. An instructor's repertoire should include the ability to organize and present content in segments that build student knowledge, and an understanding of how to test for understanding in ways specifically aligned with the instruction provided.

We expect students to go to community colleges to bolster grades for transfer, for technical degrees or certificates, or for remedial instruction, and students with these needs require instructors who know how to organize and teach the content and are able to change their pedagogy to improve learning outcomes (Bickerstaff & Cormier, 2015).

When was it established that "content expertise" is sufficient in and of itself to qualify an educator, and is that claim anywhere in peer-reviewed research journals? It's not there. We know that applied experience is beneficial and can help with understanding, but many functioning at this high level have difficulty translating concepts to levels at which learners understand. In addition, there is a lack of understanding of what information is necessary at the 100 level, the 200 level, and/or as prerequisite information for sequential courses, especially among adjunct/part-time instructors, who, due to budget constraints, make up much of the teaching force in the current educational climate. This gap, similar in nature to understanding test construction and alignment with course content, is understanding the alignment of instructional goals and course outcomes (Bloom's Taxonomy). It is the few who see the need, and invest the time in how the learning process works, that make a difference, and they are few because change takes additional expert-level effort.

The Uncomfortable Necessity of Change - Recommendations

Breaking down barriers is hard work. Change is uncomfortable. However, if higher education continues on the current trajectory, barriers will persist, perpetuating social classes that are very difficult to move into or out of. A simple review of the sociology literature confirms this problem.

There are more questions than answers, and, indeed, many confounding factors. Some educational practices should be kept the same, because in education there are tried-and-true practices that hold up in research and practice. While these need a face-lift from time to time (because of improved understanding of the learning process, technological advancements, understanding of how technology can improve and influence learning, social-historical context, and individual learners), changing what we know works is unwise.

What should change?

  1. Include students who withdraw in measures of outcomes. This will be telling about the populations served, providing a more accurate picture of outcomes and disparities.
  2. Survey students (surprisingly, some educational institutions/unions prevent or do not practice this; unfortunately, this is true of community colleges in California). While this should not be a hiring-and-firing tool, it is an indicator of the student learning experience and of potential areas for growth.
  3. Align educational outcomes with current job practices; there is a gap between hiring-process expectations and on-the-job expectations. Better alignment could reduce poor educational outcomes.
  4. Have third-party accountability. This must be a party that is financially invested in some way and does not stand to benefit regardless of changes or outcomes. When entities are vested, they stay concerned about each organization's performance.
  5. Ensure that those hired as educators understand that "content expertise" is the starting point, and that understanding the learning process, student motivation, and the alignment of instruction and test construction (e.g., Bloom's Taxonomy) are also necessary.
  6. Alter the compensation colleges receive for students who withdraw from courses, placing the emphasis on completion to drive outcomes in a positive direction.
  7. Finally, do encourage hard and honest work from students. None of these problems excuse shoddy student effort. It has become increasingly important to clearly define academic honesty and to have accountability systems well in place. No good can come of people holding positions gained through academic dishonesty, and in some professions more than others this has the potential to cause harm.

Concluding Comments

The lack of preparation among typical students enrolling in community college for academic and workforce pathways leads to an arduous process that requires choice, persistence, and effort at the individual level. However, in defense of those who would like to make the choice and begin the path, institutions of higher education make the process so difficult that even a hardy individual could be dissuaded from persisting in a new path without bigger reasons to engage in the pursuit. Bronfenbrenner's ecological systems theory is an important lens through which to view these struggles.

It is common knowledge that access is a problem that needs to be addressed. Recent large-scale program changes that throw out an older model or system in favor of a newer one seem only to lead to the same problems, with new wrinkles to be ironed out in the new model or system. What does this translate to? A shifting of funds, with the same people in charge who found a new way to frame an existing program (or an old program shelved for lack of funds), and a slow discovery of the new problems. How to avoid a bureaucratic exercise? Take an honest look at what is in place and consider the outcomes, meaning not only those of students who complete the courses. If only there were an effectopedia test for education. However, simple answers are naive, because many confounding factors contribute to disparate outcomes.

A final thought: in a completely unrelated yet interesting story, a zoo in the third-largest province in China was closed for an unusual problem. Some visitors to the zoo heard the bark of the zoo's "African lion." The "African lion" was really a Tibetan mastiff. The zookeepers stated the lion was away breeding and the dog was there for safety reasons, but there were also two snakes substituted for sea cucumbers, a white fox "impersonating" a leopard, and another dog posing as a wolf. A zoo representative said, "We're doing our best in tough economic times."

As budgets are strained and measures of outcomes and increased accountability loom, keep sight of the main thing: serving students. Let's not substitute "barking lions" for the real product. If students are served, the funds will come. Well-done efforts are self-sustaining.

Selected References

Bickerstaff, S., & Cormier, M. S. (2015). Examining faculty questions to facilitate instructional improvement in higher education. Studies in Educational Evaluation, 46, 74-80.

Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4), 212-218.

Community College Research Center. (February, 2014). Inside out: Faculty orientations towards instructional reform. Scaling Innovations Project.  Retrieved July 7, 2015 from

Fain, P. (2015). Free community college catches on. Inside Higher Ed. Retrieved July 10, 2015 from

Kelly, A. (2015). Four reasons to be skeptical about Obama's free community college proposal. Retrieved on July 11, 2015 from

Anonymous. (2015). The state of higher education in California. Retrieved July 10, 2015 from

Sunday, May 31, 2015

Not “Me First,” No – You First, I Insist!

The scene that greeted me as I entered Costco (it could have been Sam's Club, Walmart, Big Lots, an outlet mall, Target, you name it... anywhere) earlier this afternoon was a conflict between drivers. One had cut the other off, or vice versa, and emotions were boiling over. A few people from differing ethnic backgrounds (not identified here because, let's face it, this could be anyone of any color, creed, etc.) stood to the side of the main entrance. I hurriedly walked past into the store; the cops had already been called, store personnel were present, and, frankly, I don't like the anger.

We all know the trend in our world is "Me First!" This starts young. Once, as a preschool teacher, I was supervising on the playground and witnessed the following unfold before my eyes: A little girl, new to the school, who spoke virtually no English but had a few phrases memorized, rushed up to a little boy riding by on his trike and pushed him clean off, "splat!", with the battle cry, "I had it first!" Another teacher, nearer to the incident, helped the boy up with one hand and held the little girl's hand with the other. Throughout this quickly transpiring event the pretty little gal repeated loudly, with tears and conviction, "I had it first!" "I had it first!"

I know we all grow up. These early ideas about things needing to be fair, and about exercising our rights... well, I'm not naive. I know there is a place for this. Stand up for what's right; however, I think it's high time for a societal makeover.

It is well documented that we have a profound influence on other people. There is a study showing that if one person in a parking lot takes an advertisement off their windshield and walks it to a trash can, others will follow the example. If a person throws it on the ground, the ground ends up littered with trash from others who do the same. If, in a dormitory setting, people are asked to conserve water, and one or two people follow the suggestion to get wet, turn off the water, suds up, turn the water back on, and rinse, then others will do the same. (Of course, we know they have turned off the water by the sound, not by sight; there are shower curtains, um, yeah.) For reference on these studies and a clear and scholarly explanation, see The Social Animal by Aronson (2008), a book on the social influence we have on one another. It's insightful.

Until then, not "Me first!" Let's change it up: "No, you go first. After you, I insist!"

Friday, May 1, 2015

I Just Need to Take English and Math Placement tests? Really?

Why? Why? Why are so many educational institutions living in the past?

When reviewing some research for an article I'm writing, I was a little taken aback. The article, on self-regulated learning (if you must know), gave a brief overview of ideas about learning that have influenced Western education. Among these, Thurstone (1938) provided what was thought to be a perfect description of students' abilities (the Primary Mental Abilities Test).

While these ideas contributed significantly at the time, and have relevance today, we've learned a few more things since.

In spite of these new understandings, these testing practices persist today. The idea is that the right test can classify and place students in just the right level for optimal instruction, for example the right math groups, or the right English class. This placement practice consistently yields poor results overall, yet this practice is still widely used as a primary placement tool today from preschool to college.

I am not arguing that we should throw out testing; tests do provide some useful information. But what we know from research today is that self-regulation has far more to do with successful academic outcomes than performance on a placement test. Testing alone is not enough.

We need more than the right tests for correct placement.

Monday, February 23, 2015

To Conform or Not to Conform?

Conformity is all around us. While it sounds negative to label someone a "conformist," saying that a person is a "team player" is appealing. Isn't this the same thing framed in a more palatable way? We celebrate historical figures for non-conformity, especially political figures who proved to be on the right side of things in retrospect, but these non-conformists often faced opposition and strife. This is true of Lincoln, Rosa Parks, and Martin Luther King Jr., to name a few.

Conformity is a reality of the world in which we live. Social scientists describe stages of development of the self that surround the notion of how others perceive us (e.g., Looking Glass Self).

We, as human beings, are highly influenced by those around us. A classic psychology experiment by Solomon Asch (1956) with American college students, conducted over 50 years ago, demonstrated how much we are influenced to conform. It's surprising! He asked a very simple question of a group of students. One of the students was the actual participant; all the others were confederates told to pick the same wrong answer. Asch found that when presented with a very easy problem, one a child could figure out, if everyone else gave the same false answer, about 75% of participants conformed to the group's incorrect answer at least once. The questions were asked more than once, and when all answers were added up, an average of about 35% of the naive participants' responses conformed to the wrong answer.

So this means there are situations, when we are in groups, in which we can feel pressure to knowingly choose the wrong answer. This experiment and others like it have been repeated many times. In fact, this study was repeated with participants who were monitored with fMRI technology. What was found?

Basically, it is a painful experience to go against the group's position (Berns and colleagues, 2005). The area of the brain associated with pain and discomfort (e.g., the amygdala) was very busy for participants who went against the group's wrong answer. The brain image of the participant who went with the group opinion wasn't nearly as disturbed.

What could we do?

I suggest knowing what you think and believe in advance, or the group can easily influence you.

IME 2015 Conference, Keck School of Medicine at USC, Oral Presentation

The following is the longer version of the paper submitted to IME 2015 (conference syllabus). Click here to view the oral presentation (Google Docs) or here for a YouTube version, and click here to view the Q&A.

The Effect of Audience Response Systems on Metacognition in Graduate Students: A Two-Year Mixed Methods Study   

Melanie Brady, Ed.D., University of Southern California, Rossier School of Education
Jane Rosenthal, Ed.D., Assistant Dean, School of Applied Life Sciences, Keck Graduate Institute
Christopher P Forest, MSHS, DFAAPA, PA-C University of Southern California, Keck School of Medicine, Division of Physician Assistant Studies

Problem Statement
Use of educational technology to engage learners continues to grow at a rapid pace. Studies of the effectiveness of clicker use find that when clickers are utilized with research-based instructional strategies, the learning experience in large lectures is enhanced.2 In a study with undergraduates (n = 198), metacognitive self-regulation seemed to improve when clickers were utilized in this manner. The comparison (low-technology) and experimental (clicker) methods each demonstrated a significant influence on learner metacognition: clickers with the summer cohort and the comparison method with the fall cohort. However, when performance outcomes and qualitative data were factored in, clickers demonstrated a high degree of significance (p < .01). The current mixed-methods study of audience response systems and metacognition investigates whether the experience for graduate health science candidates (e.g., 1st-year Physician Assistant candidates, 2013 and 2014) is consistent with the undergraduate experience, and to what degree it differs between graduate cohorts (2013-2014).

The importance of these investigations lies in the growing body of research showing that self-regulated and metacognitively aware learners tend to have improved outcomes, and that metacognition and self-regulation are teachable. Research suggests that when clickers are utilized with instructional strategies (e.g., questioning and peer instruction), performance outcomes increase and metacognition may be affected. Metacognition, the regulation of cognition and self-knowledge, is an essential component of the learning process on the way to becoming a self-regulated learner. This mixed-methods comparative study examines the extent to which high-tech devices (clickers) and low-tech devices (paddles) affect learner metacognition. It extends our 2013 mixed-methods examination of clickers and metacognition conducted with 1st-year Physician Assistant candidates, with further comparison between clickers and paddles.1,2 If the data generated by the two years are not robust enough, a third cohort is proposed for fall 2015 to increase the strength of the results and the potential for generalization.

We hypothesized that the response device that more effectively influenced metacognition would be associated with higher performance outcomes. Based on the results of the undergraduate study, we predicted that use of clickers would lead to less social comparison, which could enable more productive learning, and that use of paddles would lead to more social comparison, which could interfere with the learning process.

Data were collected from 54 graduate candidates in 2013 and 51 graduate students in 2014 during a behavioral sciences course. Clickers were used during weeks 1-5 of the course and a low-technology response system (paddles) during weeks 8-12. Paddles are handmade signs held up to indicate preferred answers (A-E); this method was selected for comparison as a system analogous to clickers in that it provides a quick visual check of student responses, allowing participants to be polled once rather than raising hands several times for a multiple-choice question. This comparative, mixed-methods study employs several measurement instruments and a pre- and post-test design to compare the two response systems. The components of metacognition of interest in this study are metacognitive judgments and monitoring, and metacognitive control and self-regulation.

Quantitative instruments. In the first week of the course, pre-test data and demographic information were collected. Questions from the Motivated Strategies for Learning Questionnaire (MSLQ)3 served as the pre-/post-test instrument. Two instruments that measure feedback systems and metacognition1 were administered at week 5 (experimental/clickers) and at week 10 (comparison/paddles). The Metacognition in Lecture Survey2 (2013 clickers, α = .910; 2013 paddles, α = .935; 2014 clickers, α = .806; 2014 paddles, α = .888) measures the metacognitive self-regulation experienced by learners in lecture through changes in learning behavior inside or outside of lecture, and the Metacognition Attribution to Response Device Scale (2013 clickers, α = .723; 2013 paddles, α = .704; 2014 clickers, α = .681; 2014 paddles, α = .827) measures the level of metacognitive influence that learners believe they experience as a direct result of the polling method. Mean quiz scores from the first 5-week session served as the measure of performance outcomes for clicker use, and mean participation scores for weeks 6-10 served for the comparison treatment (paddles).
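The α values reported above are Cronbach's alpha reliability coefficients. For readers unfamiliar with the statistic, here is a minimal sketch of how it is computed from item-level responses; the function and data are illustrative (real analyses would typically use a statistics package), not the study's:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of total scores). `items` is a list of per-item response lists, each
    of length n_respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total per respondent
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))
```

Values near 1 indicate that the scale's items move together across respondents, which is why the .8-.9 range reported for the lecture survey suggests good internal consistency.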

Qualitative instrumentation. Participants completed an online qualitative survey using Qualtrics© consisting of open-ended questions to elicit reflections about response device use. Interviews were conducted using purposeful sampling with the following criteria: 1) low mean scores, indicating little metacognitive influence attributed to clicker/paddle use; 2) mean scores in the median range, indicating a moderate-to-neutral influence; and 3) high mean scores, indicating a strong influence.

High comfort level and prior use of audience response systems were reported by 60% of participants in the 2013 cohort and 100% of participants in the fall 2014 cohort on the initial survey. Two-tailed t-tests for dependent means were conducted to examine between-groups differences in metacognitive self-regulation (the pre-post-post MSLQ) and performance outcomes (e.g., in-lecture clicker quizzes). Significance was not found between the Graduate Health Science 2013 and 2014 cohorts on the pre-post-post-test administrations, but significance was found with the metacognition instrumentation. The lack of significance between cohorts on the pre-post-post-tests indicates group similarities, which increases the potential strength of the results and the ability to generalize. The first post-test administration followed use of clickers, and the second post-MSLQ followed the comparison method (low technology). This indicates that learners in both cohorts gauged their individual metacognitive self-regulation similarly at the start of the course, following the treatment method (clickers), and following the comparison (low-technology polling).
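A t statistic for dependent (paired) means, like those used in this analysis, is simply the mean of the paired differences divided by its standard error. A minimal sketch, with made-up pre/post scores rather than the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Two-tailed t statistic for dependent (paired) means. Returns
    (t, df); compare |t| against a critical value at df = n - 1."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# Illustrative pre/post scores for four learners:
t, df = paired_t([10, 12, 9, 11], [13, 14, 12, 15])
```

In practice the p-values reported below would come from a statistics package that also supplies the t distribution, but the core computation is this ratio.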

Differences were found on formative performance assessments between the 2013 cohort (M = 87.38, SD = 5.86) and the 2014 cohort (M = 75.41, SD = 6.27), demonstrating differences in metacognition during lecture between the two groups (t(52) = 10.263, p = .001). Significance was demonstrated between groups for both instruments measuring the influence of metacognition during lecture and the attribution of metacognition to the response device (t(52) = 4.84, p = .001; t(48) = 5.83, p = .001). Qualitative analysis results were similar between groups. Clickers were perceived as a more effective way to monitor learning, while the low-technology method resulted in conformity and reduced pressure to prepare for lectures. The groups differed in that a small portion of the 2014 cohort suggested that the low-technology system created opportunities for discussion and learning and was enjoyable, an opinion shared neither by the majority of their peers in the same cohort nor by the 2013 cohort. Reports of positive learning experiences with paddle use tended to accompany indications of relief at being able to rely on the group in lieu of individual preparation when schedules were busy.

Lessons Learned
Quantitative results indicate that clickers influence learner metacognition more than low-technology response devices do, suggesting that conceptual understanding may be clarified through the use of clicker items and interactive teaching strategies (e.g., questioning and peer instruction), leading to improved formative feedback for enhanced learning. Both cohorts indicated that clickers strongly influenced peer comparisons, consistently influencing the learning process positively. Clickers can improve the accuracy of metacognitive judgments and influence the strategies utilized for learning outside of lecture. Qualitative results suggest that graduate learners are more confident in the strategies they utilize for note-taking in lecture and for lecture preparation. Several learners reported changing answers based on peer responses and feeling less pressure to prepare for lecture when the low-technology system was utilized. The focus of clickers on independent learning improved learners' ability to monitor their learning, and results indicate that learners are more apt to prepare for lecture when there is an individual response component.

Selected References
1. Brady M, Rosenthal J, Forest C. Metacognition and Performance Outcomes: An Examination of the Influence of Clickers with 1st Year Graduate; in progress.
2. Brady ML, Seli H, Rosenthal J. Metacognition and the influence of polling
systems: How do clickers compare with low technology systems? Educ Technol Res Dev. 2013;61:885-902.
3. Pintrich PR, Smith DAF, Garcia T, McKeachie WJ. Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educ Psychol Meas. 1993;53(3):801-813.