Why are We Still Debating the Proper Framework for Social and Emotional Learning? A Case Study for the Times
By Dr Richard D. Roberts, CEO and Co-Founder of RAD Science
I am a firm believer in the people. If given the truth, they can be depended upon to meet any national crisis. The great point is to bring them the real facts.
Abraham Lincoln, circa 1849
If you look at country systems, education, and health during the current pandemic, one of the hottest topics to emerge is social and emotional learning (SEL). It is given as THE reason that schools in America, if not many developed countries, should, nay must, open up, even if a few students, parents, grandparents, family, and friends may die along the way. The argument is somewhat murky in medicine, though organizations such as the American Academy of Pediatrics are apparently behind it. I tried to find a definitive piece on the ROI of this approach and how much SEL mattered; the best I got was this (from a national survey, no less). While it showed the well-being of parents and children was compromised in around 10% of the population, the study’s caveats are telling: “We also observed improvements in behavioral, mental and physical health for a subset of the population and no changes in these domains for the majority of the population, highlighting the heterogeneity in the effects of the pandemic and its consequences on families”.
There is no educational system in the world, to my knowledge, that has ever given this much attention to SEL; now it is deemed the official motivation for why we must get children back to school. I have been working in education for three decades, waiting for SEL to have its day. Now that it has, I first felt a mixture of “duh” and excitement, but the more I reflected, the more I became disgruntled, then peeved, and ultimately deeply concerned.
The reason? For every small scientific advancement made in the last three decades, there has been one step up and two steps back (Springsteen fan).
What do I mean by this? Let us take a little peek under the covers of the science, and how it plays out over time. After a series of comprehensive reviews, distilled in a couple of award-winning books and peer-reviewed articles (examples, here and here), we concluded there were myriad issues with SEL programs and their complementary assessments. And so, nearly two decades ago, we questioned SEL programs, while often in parallel obliterating attendant assessments. Each of these critiques was bold, audacious, and in my humble opinion far too negative. The field was new and deserved a little wriggle room. But the work caught the attention of many people and likely influenced how major groups that we had reviewed rather negatively approached evaluations, assessments, and policy perspectives thereafter.
But here is the rub. The field has – despite hundreds of LinkedIn articles and Twitter posts every week – not gone nearly half as far as a meaningful science-based enterprise as the passage of time might have projected. There are few studies that point to the efficacy of a particular intervention, let alone entire programs; and the meta-analytic evidence, while suggestive, is not nearly as earth-shattering as proponents sometimes claim. (Nor should it be, for that matter. Science tells you small effects matter, and the idea that some of these social and emotional skills are twice as important as cognitive skills was poppycock from the get-go.)
And attempts to define a universal overarching framework have been hit or miss.
Indeed, out there is a virtual Tower of Babel caused by a proliferation of SEL models, when this was (arguably) resolved over two decades ago. How does this play out? Unfortunately, with an overabundance of misinformation! The so-called EASEL Taxonomy project should be the bee’s knees when it comes to evaluating SEL taxonomies; with extensive external funding (they list over 46 sources on their website), they have a tool that allows all possible comparisons between these models. Try it here: you can plug in your favorite model.
Sadly, the EASEL Taxonomy project is misguided, and it may be doing perceptible harm to the nearly three decades of research in the field. I started by using it to do some crosswalks and found, after inputting multiple models with which I had deep expertise and experience (e.g., CASEL, Big Five, ACT Holistic Framework, Emotional Intelligence), that it often produced noteworthy misses and false alarms, and it just got worse and worse the more I dug. In the first version of this blog, I indicated I was working with EASEL’s thought leadership to correct what I consider to be systematic errors. That meeting was subsequently declined, and this sentence serves as an addendum to that statement.
I said the issue of the best framework was resolved ages ago, and indeed it was. If you wish to talk about human traits — grit, growth mindset, emotional stability, empathy, achievement striving, social and emotional intelligence — take your pick, you end up using adjectives, and over many, many years these all got distilled into two models: the Cattell-Horn-Carroll (CHC) model for cognitive abilities and the Big Five factor model for all the other stuff (sometimes called noncognitive skills, personality, SEL, behavioral, or transversal skills; take your pick, we know what is meant). If you are going to talk about SEL, it likely represents some combination of these two models; if not, you may have invented new terms and concepts, but it will be incumbent upon you to show how they differ. In the case of many such candidate abilities (trait emotional intelligence, grit), history has not been so kind (for an example of just how poorly this turns out, see here).
It is vitally important to point out that these two models also have voluminous scientific literatures attached to them; that is, peer-reviewed papers in the tens, if not hundreds, of thousands. These papers touch on all sorts of issues that are important for human growth and development socially, emotionally, academically, developmentally, even pharmacologically, including what exactly the Big Five and cognitive ability models predict, both alone and in tandem (e.g., GPA, absenteeism, leadership potential), measurement approaches (these days, extending well beyond self-report), consequential validity, the extent to which they correct for adverse impact when used in selection models, and their relative importance in applied settings, to name but a few; the list is singularly impressive. And these models extend far beyond education to include the world of work, mental health and wellbeing, military applications, policy implications, and the development of meaningful targeted interventions, across the life course.
In sum, every other social and emotional learning model will be playing decades-long catch-up to rival the expansive databases accompanying the Big Five and CHC models. Moreover, as we will show in future blogs, even if seemingly more sophisticated at first blush, these other models prove themselves largely redundant in light of this infrastructure. It should also be pointed out that the Big Five and CHC models are purely empirical, not driven by armchair speculation, committee, or the cult of personality (pardon the labored pun). And they can seamlessly provide a straightforward mapping back to the types of social and emotional problems children and parents may suffer as a result of COVID (for example, the Big Five almost ended up being a framework for interpreting clinical symptomatology in the new DSM-5; as a result, we have many good insights into how it will truly impact children’s and adults’ lives). The same cannot be said for virtually all other SEL models, couched as they are largely in the lingo of positive psychology and its oft-times populist spin. And paraphrasing Lincoln — albeit slightly — it is in the factual basis of science that we should rest our faith during this time of global crisis.
About Dr Richard D. Roberts
Richard D. Roberts, Ph.D. is currently CEO and Co-Founder of RAD Science. A scientist-practitioner, for three decades he has been at the forefront of the research and development of cognitive (e.g., ASVABTM) and noncognitive (e.g., SELF+eTM, ACT TesseraTM) assessment systems. Dr. Roberts has published over a dozen books and more than 200 articles on these topics in diverse sub-disciplines, including education, workforce, health, and public policy. Previously serving in senior leadership roles at the Educational Testing Service, ProExam, and ACT Inc., Dr. Roberts has worked closely with major organizations such as the Intelligence Advanced Research Projects Activity, OECD, Army Research Institute, Australian Research Council, and the Bill and Melinda Gates Foundation. Among Dr. Roberts’ professional honors are two ETS Presidential Awards, two PROSE book awards, a University medal, and a National Research Council Fellowship.
Dr Roberts on LinkedIn
Research and Assessment Design: Science Solutions (RAD Science) is an evidence-based assessment, learning, and research organization committed to serving students, educators, life-long learners, and workers in all sectors of society by providing tools to measure and develop social and emotional learning (SEL) and cognitive skills. We have one of, if not the, largest pools of SEL item banks in the industry, covering myriad unique measurement approaches and disparate education, workforce, and policy sectors. Visit the website for more information on its mission, staff, products, and services.
RAD Science on Twitter