What Is Outcomes Assessment? A Practitioner's Guide for Higher Education
TL;DR. Outcomes assessment is the systematic process by which a college or university decides what students should know and be able to do, gathers evidence of whether they actually can, and uses that evidence to improve teaching and programs. It is the operating system of academic accreditation. It happens at three levels (course, program, institution), uses both direct and indirect evidence, and ends each cycle with a documented change, a step called closing the loop.
Contents
1. Definition
2. The outcomes assessment cycle
3. Types of outcomes
4. Direct vs indirect assessment
5. Closing the loop
6. What each U.S. regional accreditor expects
7. Common pitfalls and how to avoid them
8. Tools used (general categories)
9. Glossary
10. Next steps
1. Definition
Outcomes assessment is the systematic, repeated process of: stating what students should know and be able to do; gathering evidence of student performance against those statements; analyzing the evidence; and making documented changes to teaching, curriculum, or institutional practice based on what the evidence shows.
It is sometimes called student learning outcomes assessment, institutional effectiveness, or simply assessment. The exact phrase varies by region and accreditor; the underlying activity is the same.
It is not the same as grading. Grading evaluates an individual student's performance in a single course. Outcomes assessment evaluates the program's or institution's success in producing the learning it promises, aggregated across cohorts.
2. The outcomes assessment cycle
Practitioners describe the assessment cycle in slightly different terms, but the steps are consistent across accreditors:
- Articulate outcomes. Write clear, measurable statements of what students should know or be able to do at the course, program, and institutional level.
- Map the curriculum. Show which courses introduce, develop, and demonstrate each outcome. The output is a curriculum map that links courses to program outcomes to institutional outcomes; a minimal sketch of such a map appears at the end of this section.
- Plan the measurement. Decide what evidence will be gathered, when, by whom, and against what standard. Pick rubrics or scoring instruments.
- Gather evidence. Collect student artifacts (papers, projects, performances, exam items, capstones), score them against the rubric, and aggregate the results.
- Analyze the evidence. Interpret the aggregated data. Compare against your stated standard. Identify gaps.
- Act on the evidence (close the loop). Document a specific change to teaching, curriculum, advising, or institutional practice. Tie that change to a measurable outcome.
- Verify in the next cycle. Re-measure the same outcome after the change. Did the evidence improve? Repeat.
The cycle is intentionally a loop, not a line. Accreditors look for evidence that the loop has actually closed, repeatedly, over time.
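To make the curriculum-map step concrete, here is a minimal sketch of a map held as plain data, with the kind of completeness check a program lead might run. All course codes, outcome IDs, and the I/D/M (introduce/develop/demonstrate) shorthand are hypothetical illustrations, not any platform's actual schema.

```python
# A curriculum map as plain data: which courses address which
# program-level outcomes (PLOs), and at what stage.
# I = introduced, D = developed, M = demonstrated.
# All course codes and outcome IDs are hypothetical.
CURRICULUM_MAP = {
    "IT-101": {"PLO-1": "I", "PLO-2": "I"},
    "IT-210": {"PLO-1": "D", "PLO-3": "I"},
    "IT-290": {"PLO-1": "M", "PLO-2": "M", "PLO-3": "D"},
}

PROGRAM_OUTCOMES = ["PLO-1", "PLO-2", "PLO-3"]

def undemonstrated(curriculum_map, outcomes):
    """Return outcomes no course demonstrates ('M'): gaps where
    the program has no capstone-level evidence source."""
    demonstrated = {
        plo
        for stages in curriculum_map.values()
        for plo, stage in stages.items()
        if stage == "M"
    }
    return [plo for plo in outcomes if plo not in demonstrated]

print(undemonstrated(CURRICULUM_MAP, PROGRAM_OUTCOMES))
# -> ['PLO-3']: introduced and developed, but never demonstrated,
#    so this map has an evidence gap to close before measuring.
```

A gap like this is exactly what the mapping step is meant to surface before any evidence is gathered.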
3. Types of outcomes
Course-level outcomes (SLOs / Student Learning Outcomes)
SLOs describe what a student should know or be able to do by the end of a single course. They are measurable, written in active language, and tied to specific evidence (assignments, exam items, performances).
Example: "By the end of System Administration I, the student will be able to install and configure Active Directory Domain Services on Windows Server, including DNS, DHCP, Group Policy, and OU structure."
Program-level outcomes (PLOs)
PLOs describe what a graduate of a degree, certificate, or major program should know or be able to do. PLOs are usually broader than SLOs and are demonstrated across multiple courses.
Example: "Graduates of the Information Technology Associate of Applied Science program will design, deploy, and maintain a small-business network including identity, file/print, email, and security services."
Institutional outcomes
Institutional outcomes describe what every graduate of the institution should be able to do, regardless of major. They are sometimes called core outcomes, institution-wide learning outcomes, or general education outcomes.
Example: "Every graduate of this institution will communicate technical information clearly to non-technical audiences, both in writing and in speech."
4. Direct vs indirect assessment
Accreditors expect both kinds of evidence, with direct evidence carrying the most weight.
Direct assessment
Direct assessment measures actual student performance against the outcome. The evidence comes from student work itself; a scoring sketch follows the list below.
- Capstone projects scored with a rubric
- Standardized exam items mapped to outcomes
- Performance assessments (a student installs and configures a server while being observed)
- Portfolio reviews
- Industry certification pass rates tied to specific outcomes
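As a sketch of how rubric-scored artifacts become program-level evidence: score each artifact on a proficiency scale, then compare the share of students at or above a target level to a stated standard. The scale, target, standard, and scores below are all hypothetical.

```python
# Hypothetical rubric scores for one outcome on a 1-4 proficiency
# scale, one score per student artifact (e.g., capstone projects).
scores = [4, 3, 2, 3, 4, 1, 3, 3, 2, 4]

TARGET_LEVEL = 3   # "proficient" on this hypothetical 4-point scale
STANDARD = 0.75    # stated standard: 75% of artifacts at/above target

at_or_above = sum(1 for s in scores if s >= TARGET_LEVEL)
rate = at_or_above / len(scores)

print(f"{at_or_above}/{len(scores)} artifacts at level {TARGET_LEVEL}+ "
      f"({rate:.0%}); standard {STANDARD:.0%}; "
      f"{'met' if rate >= STANDARD else 'not met'}")
# -> 7/10 artifacts at level 3+ (70%); standard 75%; not met
```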
Indirect assessment
Indirect assessment captures perceptions, self-reports, or proxies. It is useful for triangulation but cannot stand alone; a small comparison sketch follows the list below.
- Graduating-student surveys
- Employer surveys
- Alumni surveys
- Course evaluations
- Job placement rates
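A small sketch of triangulation in practice: put a direct measure and an indirect measure of the same outcome side by side and flag divergence worth investigating. The rates and the divergence threshold are hypothetical.

```python
# Triangulating one outcome: direct rubric evidence vs. an indirect
# graduating-student survey item, both as rates between 0 and 1.
direct_rate = 0.70    # share of artifacts scored at/above target
indirect_rate = 0.92  # share of graduates self-reporting mastery

THRESHOLD = 0.15      # hypothetical gap large enough to investigate

gap = indirect_rate - direct_rate
if abs(gap) > THRESHOLD:
    print(f"Divergence of {gap:+.0%}: students report more mastery "
          f"than their work shows; review the course, the rubric, "
          f"or the survey item.")
```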
5. Closing the loop
Closing the loop is the moment in the cycle where data becomes a documented change. It is the step accreditors scrutinize most closely, because it is the step institutions most often skip.
A complete closing-the-loop record contains four parts (a minimal record structure is sketched at the end of this section):
- What we found. The aggregated assessment data and what it shows.
- What we will do differently. A specific, named change to teaching, curriculum, advising, or institutional practice.
- Who is responsible and by when. A name and a date.
- How we will verify. Which outcome we will re-measure next cycle to test whether the change worked.
Generic statements like "we will continue to monitor" do not satisfy accreditors. The change must be specific and verifiable.
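As a sketch only, the four-part record maps naturally onto a small data structure, with a guard against exactly the generic statements above. The field names and the banned-phrase list are hypothetical, not any platform's schema.

```python
from dataclasses import dataclass
from datetime import date

# Phrases that signal a non-specific action (hypothetical list).
GENERIC_PHRASES = ("continue to monitor", "keep an eye on", "as needed")

@dataclass
class ClosingTheLoopRecord:
    finding: str         # what we found: aggregated data, plainly stated
    action: str          # what we will do differently: specific and named
    owner: str           # who is responsible
    due: date            # by when
    verify_outcome: str  # which outcome we re-measure next cycle

    def is_specific(self) -> bool:
        """Reject action statements a reviewer would call generic."""
        text = self.action.lower()
        return not any(phrase in text for phrase in GENERIC_PHRASES)

record = ClosingTheLoopRecord(
    finding="62% of capstones met level 3+ on PLO-2; standard is 75%.",
    action="Add a scaffolded Group Policy lab to IT-210 in fall term.",
    owner="J. Rivera, IT program lead",
    due=date(2026, 8, 15),
    verify_outcome="PLO-2",
)
assert record.is_specific()
```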
6. What each U.S. regional accreditor expects
The seven U.S. regional accrediting commissions all require systematic, ongoing outcomes assessment. The phrasing varies, but the substance is the same: What do you say students will learn? Show me the evidence that they did. Show me what you changed because of the evidence.
| Accreditor | Region | Key assessment expectation |
|---|---|---|
| HLC (Higher Learning Commission) | North Central / Midwest, including Missouri | Criterion 4.B requires the institution to demonstrate a commitment to educational improvement through ongoing assessment of student learning. Documented closing-the-loop work is reviewed at every comprehensive evaluation. |
| SACSCOC | Southern | Standard 8.2.a requires identification of expected outcomes for each educational program, ongoing assessment, and use of results for improvement. Standard 8.2.b extends the same requirement to academic and student support services. |
| MSCHE (Middle States) | Mid-Atlantic | Standard V (Educational Effectiveness Assessment) expects clearly stated outcomes, organized assessment processes, and evidence of use of results. |
| NECHE (New England Commission of Higher Education) | New England | Standard 8 (Educational Effectiveness) requires defined measures of student learning and success, systematic assessment, and use of the results for improvement. |
| NWCCU | Northwest | Standard 1.C / 1.D require defined student learning outcomes for each program and evidence of systematic assessment producing meaningful improvement. |
| WSCUC | Western (senior colleges) | Standard 2 (Achieving Educational Objectives) and Standard 4 (Creating an Organization Committed to Quality Assurance, Institutional Learning, and Improvement) require demonstrated assessment of program-level learning outcomes, with results used in decision-making. |
| ACCJC | Western (community colleges) | Standard II.A.3 requires identification of student learning outcomes for courses, programs, certificates, and degrees, and assessment of student achievement of those outcomes. Standard I.B emphasizes use of assessment data for institutional improvement. |
Specialized accreditors (ABET, AACSB, CAEP, ACEN, CCNE, ACBSP, and others) layer additional, more specific requirements on top of the regional accreditor's expectations. AtlasOA supports the regional accreditors directly and supports outcome-to-criterion mapping suitable for most specialized accreditors.
7. Common pitfalls and how to avoid them
Pitfall 1: Outcomes that are not measurable. "Students will appreciate the importance of cybersecurity" cannot be measured. Replace with "Students will identify three categories of common attacks and explain a mitigation for each."
Pitfall 2: One person doing all of it. If outcomes assessment lives entirely with one assessment director, it dies when that person leaves. Distribute the work to faculty leads at the program level.
Pitfall 3: Collecting data and never closing the loop. The most common failure mode. Build a workflow that does not let you mark a cycle "complete" without a closing-the-loop record.
Pitfall 4: Counting course grades as outcomes evidence. Course grades blend many outcomes plus participation, attendance, and effort. Accreditors specifically call this out as insufficient. Use rubric-scored artifacts that map to specific outcomes.
Pitfall 5: Surveys without direct evidence. Indirect evidence (surveys, perceptions) is fine for triangulation. It cannot be the only evidence. Pair every PLO with at least one direct measure.
Pitfall 6: Changing what you measure every cycle. Accreditors want trend lines. Pick a measure, stick with it for at least two cycles, and watch the data move.
Pitfall 7: Treating assessment as a once-a-decade reaccreditation sprint. Continuous, low-friction assessment that produces a documented change every term beats a frantic six-month evidence-collection sprint right before the site visit. Build a cadence you can sustain.
8. Tools used (general categories)
Institutions use one or more of these tool categories to run outcomes assessment:
- Dedicated outcomes assessment platforms. Examples: AtlasOA, Watermark Outcomes Assessment Projects, Nuventive Improve, Anthology Outcomes (formerly Compliance Assist), Weave, AEFIS, eLumen.
- LMS-native rubric tools. Canvas, Blackboard, Moodle, and D2L Brightspace include rubric scoring tied to outcomes. These work for course-level data collection but rarely roll up cleanly to program- or institution-level reports; the sketch after this list shows why.
- Spreadsheets and shared drives. Many small institutions still run assessment on a folder of spreadsheets. It works at small scale, but it is fragile and hard to assemble into accreditation evidence.
- SIS-based reporting. Banner, Workday Student, Jenzabar, Colleague, and PeopleSoft can produce some assessment evidence (grades, certifications, completion) but are not designed for rubric-scored outcome evidence.
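To illustrate the roll-up problem named in the LMS item above: a course-level rubric export knows the course and its course-level outcome, but not the program outcome, so the curriculum map has to supply the join. A minimal sketch with hypothetical IDs and scores:

```python
from collections import defaultdict

# Hypothetical LMS export: (course, course-level outcome, 1-4 score).
rows = [
    ("IT-210", "SLO-2", 3), ("IT-210", "SLO-2", 2),
    ("IT-290", "SLO-1", 4), ("IT-290", "SLO-1", 3),
    ("IT-290", "SLO-4", 3),
]

# The curriculum map supplies the missing (course, SLO) -> PLO link,
# which the LMS export alone does not carry.
SLO_TO_PLO = {
    ("IT-210", "SLO-2"): "PLO-1",
    ("IT-290", "SLO-1"): "PLO-1",
    ("IT-290", "SLO-4"): "PLO-2",
}

TARGET = 3
by_plo = defaultdict(list)
for course, slo, score in rows:
    by_plo[SLO_TO_PLO[(course, slo)]].append(score)

for plo, plo_scores in sorted(by_plo.items()):
    rate = sum(s >= TARGET for s in plo_scores) / len(plo_scores)
    print(f"{plo}: {rate:.0%} at level {TARGET}+ (n={len(plo_scores)})")
# -> PLO-1: 75% at level 3+ (n=4)
#    PLO-2: 100% at level 3+ (n=1)
```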
The right answer for most technical and community colleges is a dedicated platform that pulls from the SIS, integrates with the LMS, and produces accreditation-ready reports. AtlasOA is one such platform; the comparison pages describe the full landscape.
9. Glossary
- Accreditation
- The process by which an institution or program is reviewed and approved by a recognized accrediting body. U.S. higher education has regional accreditors (seven commissions) and specialized accreditors (program-specific).
- Articulation
- The act of writing clear, measurable outcome statements.
- Closing the loop
- The step in the assessment cycle where data is converted into a documented change to teaching, curriculum, or institutional practice.
- Curriculum map
- A matrix showing where each program-level outcome is introduced, developed, and demonstrated across the courses in a program.
- Direct evidence
- Evidence from student work itself (artifacts, performances, exam items) measured against an outcome.
- Indirect evidence
- Evidence about student learning gathered from surveys, self-reports, or proxies (placement rates, course evaluations).
- Institutional effectiveness
- An umbrella term that includes outcomes assessment plus broader institutional planning, resource allocation, and operational metrics.
- Institutional outcomes
- Outcomes that every graduate of an institution should demonstrate, regardless of major.
- Program-level outcomes (PLOs)
- Outcomes that every graduate of a specific degree or certificate program should demonstrate.
- Proficiency scale
- A scale (often 1-4 or 1-5) used in a rubric to describe levels of student performance, from emerging to exceeding.
- Rubric
- A scoring tool that defines criteria, levels of performance, and descriptors for each level. The instrument used to convert student work into measurable outcome evidence.
- Self-study
- The institutional document submitted to an accreditor at a comprehensive review, summarizing how the institution meets each accreditation standard.
- Student learning outcomes (SLOs)
- Outcomes at the course level. What a student should know or be able to do by the end of a single course.
- Triangulation
- Using multiple measures (direct + indirect, multiple direct measures, etc.) to corroborate a finding.
10. Next steps
If you are starting outcomes assessment from scratch, the most important first move is to articulate program-level outcomes for your top-enrollment programs and build the curriculum map for each. Everything else (rubrics, evidence collection, closing the loop) flows from there.
If you already have outcomes and are looking for a tool to run the cycle, the AtlasOA product page describes the platform, the comparison pages describe the landscape, and the FAQ answers the questions practitioners ask most often.
If you have a specific question this page did not answer, email Ashley. Real questions get real answers.