
In an earlier blog post, we shared some of Roslyn Miller, Ph.D., and Amanda E. Major, Ed.D., CPLP, PMP's guidance on authentic assessment with integrity. Our discussion with Miller and Major was extensive and their contributions generous; we would be remiss not to offer you their further thinking on other components of assessment, such as STEM and ELA assessment, which we felt was keenly relevant to educators.

Miller and Major are instructional designers at the University of Central Florida. As a team, their expertise covers both public and private educational institutions, secondary and higher education, and subject areas spanning both ELA and STEM, all focused on effective assessment. 

So in this post, we present Roslyn Miller and Amanda Major's insights on STEM and ELA assessment: the differences, the intersections, and, ultimately, the best assessment practices.


Are there differences between ELA and STEM assessment? And if so, what are some reasons for the ways in which we gain insight into student learning?

Miller speculated that differences between assessment in STEM and ELA are connected to practical matters like class size. English Language Arts courses, with their typically smaller class sizes, allow more frequent assessments with more feedback loops. STEM courses, on the other hand, typically have larger class sizes, which can make assessment logistically stressful and overwhelming.

Roslyn Miller shared, “The humanities, in my experience, typically have smaller class sizes with more authentic assessments in the sense that they tend to have smaller, more frequent assessments. They’re more about the application, more drawing out individual responses, and students are able to get more personalized feedback throughout a course.

Whereas in STEM and business, I have seen typically larger class sizes. One reason for this, I believe, is that assessments are assumed to be automatically, or at least efficiently, graded types, such as selected-response, so that instructors are not expected to spend so much time evaluating individual assessments. So if automatically graded assessments are necessary, we have to think about how to create test items that ask students to apply, analyze, evaluate, or create (prompting higher-order thinking skills) rather than defaulting to primarily low-level test items that ask students merely to recall or recognize content."

Bottom line: practical concerns have produced distinct trends in STEM assessment.


What does STEM assessment look like? Aside from relying on fewer, high-stakes assessments (a quick look at higher education syllabi can affirm that high-stakes summative assessments compose the overwhelming majority of a course grade), STEM assessments tend to focus on one right answer. The downside is that this approach offers little transparency into how students think or whether they have command of higher-order thinking skills. It can also cause unnecessary stress among students and impede an accurate capture of student learning outcomes.

Miller stated, “Sometimes in STEM classes the teaching mode is, here is a type of problem, here is the way to solve it, and here is the one answer you should get for this problem, and this is how you’re going to be assessed. Or, here is a problem--you might have some flexibility in the way that you solve it, but there’s only one correct answer. So, sometimes our teaching becomes relegated to this approach. However, there is so much more to learn in STEM, other than specific problem types with prescribed procedures that lead to one right answer.

There are many complex or scenario problems that have multiple valid approaches and pathways to a valid solution, and it's important to provide students opportunities to approach rich problems collaboratively and with scaffolded, just-in-time guidance, to teach them how to find approaches and pathways to valid solutions, and not teach only problems we already know there’s one correct answer to, and ask them to find that one correct answer.”

Because of the constraints STEM educators shoulder in large classes, and because student work can often be scored against a single correct response, the ease of summative multiple-choice exams is tempting. Best practices can feel impossible in these situations.

But what can be done to increase teaching efficacy and student learning outcomes when it comes to assessment across both ELA and STEM subject areas? What are some tools to help educators uphold best practices in assessment?


“I don’t really see the need for a difference in assessment between ELA or humanities and the STEM classes,” said Major. “Faculty teaching those disciplines could be using the same types of assessments successfully. Creative, formative assessments could give students an opportunity to demonstrate their learning of content in a way that seems more natural. Learning could be more like play and possibly a social endeavor. I wonder if a more creative way of determining whether their learners have met course learning objectives is possible in [STEM] courses with higher numbers of students.”

Miller responded, “Absolutely, and even though a portion of STEM classes teach from a problem type -- procedure -- one correct answer approach, there is another segment of STEM teaching that has developed curricula of rich problems that help students develop a deep understanding of concepts. These problems often provide a scenario or pattern and ask students to notice and think first, and then to conjecture, test, revise, confirm, and justify their solutions, even in groups.”

“When classroom content and activities are set up so that students are practicing those skills, then the assessments in those classes are created so they are doing those same things,” continued Miller. “Some technology tools have also been developed to help teachers and students practice this approach to learning. Desmos and GeoGebra are two of the STEM tech tools that provide students opportunities to work creatively with multiple paths to valid solutions, as opposed to giving all students only the same set of simplistic problems and expecting a specific procedure to produce or select one correct solution. It’s important to provide students with more rich problems that require some thinking, application, analysis, and evaluation, and that can even be solved with some creativity.”

Another possible solution for assessment is Gradescope, which supports frequent, low-stakes assessments that provide insight into student learning while saving instructors time.


Assessment is the bridge between teaching and learning, per Dylan Wiliam (Assessment: The bridge between teaching and learning, Voices From the Middle, 2013). And despite differing assessment formats and class sizes between ELA and STEM, assessments in both must provide insight into student learning and teaching efficacy.

Miller said, “There should be a strong relationship between ELA or humanities and STEM teaching and assessment. Part of the evidence-based field of STEM teaching and assessment is that STEM is not just about solving specific kinds of problems, but about explaining your thinking and justifying your solutions. It’s presenting your solution in a way that someone else can understand it and believe it. So, that’s like a melding of the humanities and STEM, when you are teaching your students how to communicate their problem approaches and solutions.”

Major offered, “It would be really interesting if assessments that stimulate divergent thinking, the generation of new ideas or original solutions, were incorporated more frequently into STEM courses. Humanities doesn’t have a corner on the market for encouraging learners to become critical and creative thinkers. These divergent types of assessments (as examples, prompting students to adapt a solution, combine ideas, eliminate inefficiencies, or brainstorm) if contextualized to an applicable setting, can additionally bridge learning to other settings and enable students to meet performance expectations beyond the classroom.”

2020 was a disruptive time, and education was no exception; but it has also been a time of great innovation. Educators everywhere had to reevaluate their lesson plans, their pedagogy, and their assessments, and recalibrate. And to what did they recalibrate? Best pedagogical practices. We're gratified to share Roslyn Miller and Amanda Major's recommendations as we all move toward assessment with integrity as well as creativity.

Learn more about Gradescope
Roslyn Miller, Ph.D., an instructional designer with the Center for Distributed Learning at University of Central Florida, has more than 20 years' experience as an educator in public, private, military, and university settings. Roslyn moved to UCF from Mississippi State University, where she earned a PhD in curriculum and instruction and worked as a research associate with its Research and Curriculum Unit. Dr. Miller’s work as an instructional designer with UCF is supported by her experience in teaching, curriculum development, professional development, educational research, large-scale performance assessments, and program evaluation. Dr. Miller has presented at national and international conferences, published articles in education journals, and served as a peer reviewer for education journals. Her research focus is effective teaching and learning, especially in STEM.
Amanda Major, EdD, CPLP, PMP specializes in organizational development, project management, and quality in the field of online learning in higher education. Actively involved in the digital learning field, she has presented at national and international conferences and has peer-reviewed publications. In addition to her credentials, she has earned administration certifications from the Online Learning Consortium and Quality Matters. As an alumna of the Online Learning Consortium’s Institute for Emerging Leadership in Online Learning (IELOL), she served as co-director in 2019-2020 and now serves as faculty for this internationally recognized digital learning in higher education leadership development program. With experience delivering results in a variety of learner-focused and client-oriented settings, Dr. Major currently leads and contributes to projects at University of Central Florida as an instructional designer.