We shared some guidance from Roslyn Miller, Ph.D., and Amanda E. Major, Ed.D., CPLP, PMP, on authentic assessment with integrity in an earlier blog post. Our discussion with Miller and Major was extensive and their contributions generous, so we would be remiss not to share their further thinking on other components of assessment, including STEM and ELA assessment, which we felt was keenly relevant to educators.
Miller and Major are instructional designers at the University of Central Florida. As a team, their expertise covers both public and private educational institutions, secondary and higher education, and subject areas spanning both ELA and STEM, all focused on effective assessment.
So in this post, we present Roslyn Miller and Amanda Major’s insights on STEM and ELA assessment: the differences, the intersections, and, ultimately, the best assessment practices.
Are there differences between ELA and STEM assessment? And if so, what are some reasons for the ways in which we gain insight into student learning?
Miller speculated that differences between assessment in STEM and ELA are connected to practical matters like class size. English Language Arts courses, with their typically smaller class sizes, allow for more frequent assessments with more feedback loops. STEM courses, on the other hand, typically have larger class sizes, which can make such assessments logistically stressful and overwhelming.
Roslyn Miller shared, “The humanities, in my experience, typically have smaller class sizes with more authentic assessments in the sense that they tend to have smaller, more frequent assessments. They’re more about the application, more drawing out individual responses, and students are able to get more personalized feedback throughout a course.
“Whereas in STEM and business, I have seen typically larger class sizes. One reason for this, I believe, is assessments are assumed to be automatically, or at least efficiently, graded types, such as selected-response, so that instructors are not expected to have to spend so much time evaluating individual assessments. So if automatically graded assessments are necessary, we have to think about how to create test items that ask students to apply, analyze, evaluate, or create (prompting higher-order thinking skills) rather than default to primarily low-level test items that ask students to merely recall or recognize content.”
Bottom line: due to these practical concerns, distinct trends have emerged in STEM assessment.
What does STEM assessment look like? Aside from relying on fewer, high-stakes assessments (a quick look at higher education syllabi can affirm that high-stakes summative assessments make up the overwhelming majority of a course grade), STEM assessments tend to focus on one right answer. The downside is that this approach offers little transparency into how students think and whether they have a command of higher-order thinking skills. It may also cause unnecessary stress among students and impede an accurate capture of student learning outcomes.
Miller stated, “Sometimes in STEM classes the teaching mode is, here is a type of problem, here is the way to solve it, and here is the one answer you should get for this problem, and this is how you’re going to be assessed. Or, here is a problem--you might have some flexibility in the way that you solve it, but there’s only one correct answer. So, sometimes our teaching becomes relegated to this approach. However, there is so much more to learn in STEM, other than specific problem types with prescribed procedures that lead to one right answer.
“There are many complex or scenario problems that have multiple valid approaches and pathways to a valid solution, and it's important to provide students opportunities to approach rich problems collaboratively and with scaffolded, just-in-time guidance, to teach them how to find approaches and pathways to valid solutions, and not teach only problems we already know there’s one correct answer to, and ask them to find that one correct answer.”
Because of the constraints STEM educators shoulder in large classes, and because student work can be scored against a single correct answer, the ease of summative multiple-choice exams is tempting. Best practices can feel impossible in these situations.
But what can be done to increase teaching efficacy and student learning outcomes when it comes to assessment across both ELA and STEM subject areas? What are some tools to help educators uphold best practices in assessment?
“I don’t really see the need for a difference in assessment between ELA or humanities and the STEM classes,” said Major. “Faculty teaching those disciplines could be using the same type of assessments successfully. Creative, formative assessments could give students an opportunity to demonstrate their learning of content in a way that seems more natural. Learning could be more like play and possibly a social endeavor. I wonder if a more creative way of determining whether their learners have met course learning objectives is possible in [STEM] courses with higher numbers of students.”
Miller responded, “Absolutely, and even though a portion of STEM classes teach from a problem type -- procedure -- one correct answer approach, there is another segment of STEM teaching that has developed curricula of rich problems that help students develop a deep understanding of concepts. These problems often provide a scenario or pattern and ask students to notice and think first, and then to conjecture, test, revise, confirm, and justify their solutions, even in groups.”
“When classroom content and activities are set up so that students are practicing those skills, then the assessments in those classes are created so they are doing those same things,” continued Miller. “Some technology tools have also been developed to help teachers and students practice this approach to learning. Desmos and GeoGebra are two of the STEM tech tools that provide students opportunities to work creatively with multiple paths to valid solutions, as opposed to giving all students only the same set of simplistic problems and expecting a specific procedure to produce or select one correct solution. It’s important to provide students more rich problems that require some thinking, application, analysis, and evaluation, and can even be solved with some creativity.”
Another possible solution for assessment is Gradescope, which supports frequent, low-stakes assessments that offer insight into student learning while saving instructors time.
Assessment is the bridge between teaching and learning, per Dylan Wiliam (Assessment: The bridge between teaching and learning, Voices From the Middle, 2013). And despite differing assessment formats and class sizes between ELA and STEM, assessments must provide insight into student learning and teaching efficacy.
Miller said, “There should be a strong relationship between ELA or humanities and STEM teaching and assessment. Part of the evidence-based field of STEM teaching and assessment is that STEM is not just about solving specific kinds of problems, but about explaining your thinking and justifying your solutions. It’s presenting your solution in a way that someone else can understand it and believe it. So, that’s like a melding of the humanities and STEM, when you are teaching your students how to communicate their problem approaches and solutions.”
Major offered, “It would be really interesting if assessments that stimulate divergent thinking, the generation of new ideas or original solutions, were incorporated more frequently into STEM courses. Humanities doesn’t have a corner on the market for encouraging learners to become critical and creative thinkers. These divergent types of assessments (as examples, prompting students to adapt a solution, combine ideas, eliminate inefficiencies, or brainstorm) if contextualized to an applicable setting, can additionally bridge learning to other settings and enable students to meet performance expectations beyond the classroom.”
2020 was a disruptive time, and education was no exception; but it was also a time of great innovation. Educators everywhere had to reevaluate their lesson plans, their pedagogy, and their assessments, and recalibrate. And to what did they recalibrate? Best pedagogical practices. We’re gratified to share Roslyn Miller and Amanda Major’s recommendations and their move toward assessment with integrity as well as creativity.