CreateDebate











Allies: None
Enemies: None
Hostiles: None

PeterVerdin

Reward Points: 3
Efficiency: 100%

Efficiency is a measure of the effectiveness of your arguments: the number of up votes divided by the total number of votes you have received (the percentage of votes that are positive). Choose your words carefully so your efficiency score stays high.
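As an illustration of the formula above (our own sketch; the function name and vote counts are hypothetical, not CreateDebate's actual code):

    # Efficiency = up votes / total votes, expressed as a percentage.
    def efficiency(up_votes: int, total_votes: int) -> float:
        if total_votes == 0:
            return 0.0  # assumption: a user with no votes yet shows 0%
        return 100.0 * up_votes / total_votes

    # A user whose arguments drew 3 up votes and no down votes:
    print(f"{efficiency(3, 3):.0f}%")  # -> 100%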
Arguments: 3
Debates: 0
3 most recent arguments.
1 point

Summary of argument for program-level assessment:

Programmatic assessment has long been an integral part of any academic institution, whether implemented through formal or informal methods. To establish an effective process that keeps costs down and uses everyone's time efficiently, an assessment plan must be intentional and organized so that all stakeholders can implement it. A plan designed with the end in mind, student learning, incorporates all of the different phases of learning while making the learning process both fun and applicable to real life. Whether required by an institution or an accrediting body, programmatic assessment provides quality control for the educational process by giving teachers goals to work toward within their individual classes.

For schools seeking quality improvement, programmatic assessment provides leverage for increasing student retention and graduation rates, both of which are indirect measures of student engagement and learning within the classroom. By avoiding paralysis by data analysis, it allows educators to alter the learning environment in ways that show where students are excelling in content mastery and where educators need to improve the transfer of content to promote learning. Without some measure of how students are performing, we cannot determine the quality of learning within each respective program. Program assessment may not be glamorous, and it can be time-consuming, but an efficient, organized, broad, and student-centered plan will pay dividends in ensuring that students learn what they need to be successful, competent citizens in our highly educated society.

1 point

Despite our counterparts' claim that program-level assessment lacks supporting evidence, we have clearly shown that there is sufficient evidence for its use. From the student's point of view, we would be hard pressed to attend a program that did not provide assessment demonstrating alignment between individual courses and the larger programmatic learning goals. A lack of programmatic assessment can lead to a choppy and incongruent curriculum, something we have all experienced.

Our counterparts' argument centered on the claim that there is no correlation between programmatic assessment and outcomes; however, as they themselves noted, that is often because the assessment is not done well. When done well, programmatic assessment provides the true north for the entire curriculum. It has been shown that when programmatic assessment and curriculum design are student centered, students report higher satisfaction in the classroom, and that when all stakeholders know what the learning outcomes are, the entire assessment process is less burdensome and easier to implement.1 Students who engage in learning activities connecting broad concepts to real life enjoy learning more, thereby increasing retention and graduation.1

With regard to the no-zero-policy example, it is clear the policy should remain, but with a minimum failing grade such as 50%, or alternatively an incomplete, instead of a "0." This still fails the student, but it gives lower-achieving or struggling students a realistic opportunity to pass the class; a zero makes it much harder for struggling students to reach a passing final score.2
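A quick arithmetic sketch (our own illustration; the five-assignment scenario is hypothetical and not taken from Carifio and Carey) shows why the floor matters:

    # Five equally weighted assignments; an average of 60 is passing,
    # so a student needs 300 total points.
    passing_total = 60 * 5

    # After a recorded 0, the other four assignments must average 75:
    needed_after_zero = (passing_total - 0) / 4    # 75.0

    # After a 50-point minimum grade, they need only average 62.5:
    needed_after_floor = (passing_total - 50) / 4  # 62.5

    print(needed_after_zero, needed_after_floor)

A single zero forces near-top performance on everything that remains, while a 50-point floor keeps a passing final score realistically within reach.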

In summary, when done effectively, program-level assessment aligns the curriculum, serving as the glue of any quality program.3

Reference List:

1. Lambert AD, Terenzini PT, Lattuca LR. More than meets the eye: Curricular and Programmatic Effects on Student Learning. Research in Higher Education. 2006;48(2):141-168. doi:10.1007/s11162-006-9040-5

2. Carifio J, Carey T. The Arguments and Data in Favor of Minimum Grading. Mid-Western Educational Researcher. 2013;25(4):19-30.

3. Allen MJ. Assessing Academic Programs in Higher Education. Academic Leader. 2004;20(5):8. http://search.ebscohost.com.rmuohp.proxy.liblynxgateway.com/login.aspx?direct=true&db=ehh&AN=12954219&site=eds-live&scope=site. Accessed June 17, 2019.

1 point

If done poorly, program-level assessment can suffer the same pitfalls as any other type of assessment: low validity, low reliability, heavy time demands, frustration for teachers and students, high cost, and so on.1 Done properly, however, program-level assessment ensures course-level alignment and program cohesiveness, confirms culminating levels of student competence, and verifies that programmatic objectives are met.1,2 This argument focuses on how to do it well in order to maximize its benefits.

Program-level assessment should follow the same principles as summative course-level assessment, using a backward design process. Developing a program-level assessment plan includes four phases3 (sketched in code after the list):

1. Reaction: Provides students a voice in the content.3

2. Learning: Instructors establish different tools (e.g., rubrics) to gauge the learning that occurs.3 This phase also builds feedback into the learning process, particularly “feed-forward” feedback, in which students gather feedback throughout the learning process to help them further develop their thoughts and mastery of the content.4

3. Behavior: Teachers take the established program outcomes and build them into learning activities that students can connect to their own experiences.3

4. Results: Provides a bridge between what students are learning didactically and what they are performing outside the classroom.3
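As a schematic sketch only (the class, field names, and example evidence below are our own assumptions, not part of Praslova's model), the four phases might be outlined as a simple plan:

    from dataclasses import dataclass, field

    @dataclass
    class Phase:
        name: str       # one of the four phases above
        purpose: str    # what the phase contributes to the plan
        evidence: list = field(default_factory=list)  # hypothetical example tools

    plan = [
        Phase("Reaction", "give students a voice in the content", ["course surveys"]),
        Phase("Learning", "gauge learning via rubrics and feed-forward feedback",
              ["rubrics", "draft feedback cycles"]),
        Phase("Behavior", "turn program outcomes into experience-linked activities",
              ["applied projects"]),
        Phase("Results", "bridge didactic learning and real-world performance",
              ["capstone or field evaluations"]),
    ]

    for p in plan:
        print(f"{p.name}: {p.purpose}")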

The University of Maryland University College (UMUC) Graduate School used similar methods to report data on program-level student learning outcomes (SLOs) every semester, significantly improving inter-rater reliability over previous years.5 This addresses the argument that program assessment is too cumbersome, is not timely, and varies from rater to rater.5 The success at UMUC provides a clear example of how, when done properly, program-level assessment can be nimble, timely, and consistent, providing clear benefits to student learning.

Reference List:

1. Knight PT. The Value of a Programme-wide Approach to Assessment. Assessment & Evaluation in Higher Education. 2000;25(3):237-251. doi:10.1080/713611434.

2. Allen MJ. Assessing Academic Programs in Higher Education. Bolton, MA: Anker Publishing; 2004.

3. Praslova L. Adaptation of Kirkpatrick’s four level model of training criteria to assessment of learning outcomes and program evaluation in Higher Education. Educational Assessment, Evaluation and Accountability. 2010;22(3):215-225. doi:10.1007/s11092-010-9098-7

4. Hernández R. Does continuous assessment in higher education support student learning? Higher Education. 2012;64(4):489-502. doi:10.1007/s10734-012-9506-7

5. Khan R, Khalsa DK, Klose K, Cooksey YZ. Assessing graduate student learning in four competencies: use of a common assignment and a combined rubric. Research & Practice in Assessment. 2012;7(Winter):29-41.

PeterVerdin has not yet created any debates.


