Past issues of Electronic AIR since 10/27/1993 are available in the Electronic AIR archives.
Interview with Curt Naser
In this feature, we summarize the results of an “electronic
interview” with an individual institutional research practitioner. Our goal is to
foster broader knowledge and appreciation of the diverse membership of AIR, and
of the different professional contexts and activities in which we are engaged. In
this issue, we interview Curt Naser, Associate Professor of Philosophy and Facilitator
for Academic Assessment at Fairfield University (email@example.com).
e-AIR: Please start by telling us a bit
about Fairfield University.
Curt: Fairfield University has 3000 full-time
undergraduates and 2000 graduate and continuing education students. We are
a “small comprehensive university” and offer undergraduate degrees in the Liberal
Arts and Sciences, Business, Engineering and Nursing. We also offer graduate degrees
(masters) in Business, Engineering, Nursing, Mathematics, Creative Writing and American
Studies. We are located in Fairfield, CT just outside Bridgeport, CT and an hour
train ride from New York City.
Fairfield is one of the 28 Jesuit colleges and universities
in the United States. The Jesuit mission of the University focuses on academic excellence
and the idea of “men and women for others”. So we stress not only the importance
of academic work, but are very interested in practical service to the community.
The Jesuit mission doesn’t inform so much “what” we do as “how” we do it. Though
obviously we have a strong religious and Catholic background, the University goes
out of its way to welcome all faiths and perspectives in a vigorous open dialogue
about the wide range of issues in scholarship and in the world.
e-AIR: You hold two positions at Fairfield:
you are an Associate Professor of Philosophy and the Facilitator for Academic Assessment.
Let’s start by talking about your faculty responsibilities.
Curt: I am a philosopher. I wrote my
dissertation on Kant and Hegel (German Idealism) at Stony Brook University. While
I was writing my dissertation and reached the end of my fifth year of funding as
a graduate student, I was asked to fill in teaching medical ethics at the Stony
Brook Medical School. That work turned into a full time job within a matter of months
and I spent four years at the medical school teaching medical ethics.
I was hired at Fairfield University primarily as a medical
ethicist. Although I have done less teaching since getting involved in assessment,
I continue to teach one course per semester – usually an ethics course. Most of
my published work has been in the ethics of research involving human subjects –
much of it oriented toward the ethics of research involving medical records, human
tissue samples and genetics.
e-AIR: How did you become involved in assessment?
Curt: I have a rather peculiar route
into assessment. Starting in 1999 I began dabbling in database-to-web programming.
I initially built a little online quizzing system for my own classes. Over the next
couple of years I built several online systems, including my own course management
system. In 2003 I was introduced to ColdFusion, a web server scripting language.
I rebuilt my course management system in ColdFusion and integrated it with a system
built to manage faculty appointments to the committees they serve on.
In 2004 I provided my course management system to a faculty
member in our Dolan School of Business, Professor Michael Tucker in the Finance
department. It turns out he was (and still is) the chair of the Continuous Improvement
and Assessment Committee for the business school. They were looking for online tools
to assist them in managing their assessment program, which is required for AACSB
accreditation. They wanted a method to link assignments in courses to programmatically
defined learning objectives and to then be able to randomly sample student work
from those assignments by learning objective. Then they wanted to apply assessment
rubrics to the sample artifacts and crunch the numbers.
They talked with WebCT, Blackboard, and a number of student
portfolio vendors and none of them, at that time, could do what they were looking
for. So they asked me if I could make my course management system do this. Foolishly,
I said yes, and commenced to rewrite my system so that it could handle 200+ courses
and a thousand students. I had some major work to do to get the system to handle
a lot of traffic as well as integrate the assessment methodologies into the system.
I spent a summer rewriting the system, building it on top of a MySQL database. I
had to write scripts to populate the system each night with downloads from Banner,
and I had to create functions that a lot of different faculty wanted in a course
management system.
So my first introduction to assessment of student learning
was through this experience of programming assessment methodology into an online
course management system. What really convinced me about assessment, however, was
a two day assessment seminar sponsored by AACSB that I attended with Professor Tucker.
The person leading that seminar was Doug Eder of Southern Illinois University at
Edwardsville. Doug is a biologist and I thought it was interesting that a biologist
was teaching schools of business how to do assessment. After about the first half
day of the workshop, I was sold. The fundamental question that assessment of student
learning asks is simply, “Are our students learning what we profess to be teaching
them?” This doesn’t seem like an unreasonable question to ask and it seems to me
that faculty have a responsibility to answer this question. So I basically learned
about academic assessment from Doug Eder and from the project of programming assessment
methodology into my own course management system.
e-AIR: You call your course management
system “Eidos.” What does Eidos mean? And what does it do?
Curt: “Eidos” is the Greek word for “idea”
and is often translated as “form”, as in the “Platonic Forms,” those eternal ideas
that are the reality behind sensible appearances according to Socrates. I used the
term “Eidos” because the applications I build are interactive web systems, and the
way users interact with a web system is through “forms” … I know, it’s a bad (philosophical)
pun. It also has the resonance of the distinction between appearance and reality,
and that raises the whole question about the reality of web pages that are generated
by a dynamic system.
The Eidos system comprises four major pieces. There
is the course management/assessment system, and that is the 800-pound gorilla. But
there is also a document management system, a faculty activity reporting system,
and a bunch of reports and analyses of institutional data.
Though the course management/assessment system went live
for our business school in the Fall of 2005, I really haven’t stopped developing
it. We now have over 370 courses running in the system with over 3500 students using
it this semester. Courses are in all 5 schools of the University (Business, Engineering,
Arts & Sciences, Education and Continuing Education) and each of the professional
schools is using the program assessment features.
I first created the assessment system for the business
school’s AACSB accreditation. It works like this. The business school faculty input
their assignments, check off which programmatic learning objectives apply to each
one, and ask their students to submit their work electronically. The Eidos system
archives that student work indefinitely (we have archived over 100,000 student papers,
spreadsheets, presentations, and even java programs, in Eidos since the Fall of
2005). The assessment committee can query this database and retrieve a random selection
of papers whose assignments have been linked to a selected learning objective. Other
parameters from Banner or the course management system – semester, subject code,
course level, student year, student major, etc. – can be used as selection criteria.
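The sampling step Curt describes — pull a random set of archived artifacts whose assignments are linked to a chosen learning objective, filtered by Banner-style attributes — is essentially a join plus a random limit. A minimal sketch follows; the table and column names are hypothetical, since the interview does not describe Eidos's actual schema.

```python
import sqlite3

# Hypothetical mini-schema; the real Eidos tables are not described
# in the interview, so these names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assignments (id INTEGER PRIMARY KEY, course TEXT);
CREATE TABLE assignment_objectives (assignment_id INTEGER, objective TEXT);
CREATE TABLE artifacts (id INTEGER PRIMARY KEY, assignment_id INTEGER,
                        student TEXT, semester TEXT);
""")

# Seed a few rows: two assignments, one linked to each objective.
conn.executemany("INSERT INTO assignments VALUES (?, ?)",
                 [(1, "FI 101"), (2, "AC 203")])
conn.executemany("INSERT INTO assignment_objectives VALUES (?, ?)",
                 [(1, "critical thinking"), (2, "quantitative reasoning")])
conn.executemany("INSERT INTO artifacts VALUES (?, ?, ?, ?)",
                 [(i, 1 if i % 2 else 2, f"student{i}", "Fall 2005")
                  for i in range(1, 21)])

def sample_artifacts(objective, semester, n):
    """Randomly sample n archived artifacts whose assignment is linked
    to a learning objective, filtered by a Banner-style attribute."""
    return conn.execute("""
        SELECT a.id, a.student
        FROM artifacts a
        JOIN assignment_objectives ao ON ao.assignment_id = a.assignment_id
        WHERE ao.objective = ? AND a.semester = ?
        ORDER BY RANDOM()
        LIMIT ?""", (objective, semester, n)).fetchall()

sample = sample_artifacts("critical thinking", "Fall 2005", 5)
```

Drawing the sample inside the database keeps the committee's selection reproducible as a single query rather than ad hoc file shuffling.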
Before selecting the student artifacts, the assessment
committee writes a rubric to assess the learning objective. That rubric is entered
into the Eidos system and committee members use it to evaluate each student artifact
selected. Once all the artifacts have been evaluated by the rubric, the Eidos system
generates a report showing the average trait score and a detailed distribution of
trait scores along with an inter-rater reliability score.
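The report Curt describes — per-trait averages, score distributions, and an inter-rater reliability figure — can be sketched in a few lines. The interview does not say which reliability statistic Eidos uses, so simple percent agreement between rater pairs stands in here; the rater and trait names are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: two committee members rate three artifacts
# on two rubric traits, each on a 1-4 scale.
scores = {
    "rater1": {"a1": {"thesis": 3, "evidence": 2},
               "a2": {"thesis": 4, "evidence": 3},
               "a3": {"thesis": 2, "evidence": 2}},
    "rater2": {"a1": {"thesis": 3, "evidence": 3},
               "a2": {"thesis": 4, "evidence": 3},
               "a3": {"thesis": 3, "evidence": 2}},
}

def trait_report(scores):
    """Average and score distribution for each rubric trait,
    pooled across raters and artifacts."""
    pooled = {}
    for by_artifact in scores.values():
        for traits in by_artifact.values():
            for trait, s in traits.items():
                pooled.setdefault(trait, []).append(s)
    return {t: {"mean": sum(v) / len(v), "distribution": Counter(v)}
            for t, v in pooled.items()}

def percent_agreement(scores):
    """Share of (artifact, trait) cells on which rater pairs gave the
    identical score -- a crude stand-in for inter-rater reliability."""
    agree = total = 0
    for r1, r2 in combinations(scores, 2):
        for art in scores[r1]:
            for trait in scores[r1][art]:
                total += 1
                agree += scores[r1][art][trait] == scores[r2][art][trait]
    return agree / total

report = trait_report(scores)
reliability = percent_agreement(scores)
```

A production system would likely use a chance-corrected statistic such as Cohen's kappa, but percent agreement is enough to show the shape of the report.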
As I built this system I realized the same assessment
machinery could be applied at the classroom level: in short, I set it up for any
instructor to define any rubric they wish and apply that rubric to electronically-archived
or hard copy versions of student work. Such rubrics play a dual role: they generate
a grade for the artifact and they generate classroom assessment data for that particular
assignment. They also make the expectations for the assignment very clear to students,
if the instructor chooses to display the rubric before they write, and the grading
criteria are clear when students get their rubric scores back. The same “rubric
engine” can also be used to assess student competencies
and skills directly without the mediation of written artifacts. This method of assessment
is employed by our schools of Engineering, Nursing and Education.
I created the document management system for our NEASC
accreditation visit. It currently houses around 3000 documents…meeting minutes,
policies, reports, etc. … anything that we collected for our accreditation visit,
or that individual schools or departments are collecting for their own visits. One
of the key features of the system is that it is based on “virtual directories or
folders.” Folders can be created within folders, moved to other folders, and cross-listed
to other folders, as can files themselves. This allows us to easily make one file
or folder appear in multiple folders, as needed.
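Because a file or folder can appear in many folders at once, the "virtual folder" design is naturally a many-to-many link table rather than a filesystem tree. A hypothetical sketch — none of these table names come from Eidos:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Membership is a link table, not a parent_id column on the item,
-- so the same file can be cross-listed into any number of folders.
CREATE TABLE folders (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE files   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE folder_files (folder_id INTEGER, file_id INTEGER);
""")

conn.executemany("INSERT INTO folders VALUES (?, ?)",
                 [(1, "Standard 4: Academic Program"),
                  (2, "Standard 8: Assessment")])
conn.execute("INSERT INTO files VALUES (1, 'CAS assessment minutes.pdf')")

# Cross-list one file into both accreditation-standard folders:
conn.executemany("INSERT INTO folder_files VALUES (?, ?)",
                 [(1, 1), (2, 1)])

def folders_containing(file_id):
    """Every folder in which a given file appears."""
    return [name for (name,) in conn.execute("""
        SELECT f.name FROM folders f
        JOIN folder_files ff ON ff.folder_id = f.id
        WHERE ff.file_id = ?""", (file_id,))]

locations = folders_containing(1)
```

Cross-listing is then a one-row insert, which is what makes linking the same evidence under several accreditation standards cheap.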
For our NEASC self-study, we created folders for each
accreditation standard and linked files and folders from all over the document system
into the appropriate standards folders, thus making all of the evidence for each
accreditation standard immediately available for the visiting team. We even put
the accreditation standards themselves into the database and linked files and folders
into each accreditation standard and sub-paragraph.
Our goal was to have a very well educated visiting team.
We wanted them to do all their reading ahead of time so that we could engage them
in conversations about the issues we had identified as important, either because
we were successful at them or because we recognized we needed to improve on them.
We gave our visiting team access to the full document system eight weeks before
their visit. This strategy paid off. One visiting team member came so well prepared
that when he met with our College of Arts & Sciences assessment committee he
was able to report to them statistics from our own student surveys that most of
the faculty present at the meeting didn’t even know existed. This gave him and the
visiting team instant credibility and it gave us a final report that really helped
us move the institution forward.
The faculty activity reporting system is based on a bibliographic
database of faculty publications, presentations, creative work, grants and service.
The activity reporting system can generate annual reports, three year reports and
a full CV, and aggregate reports at the department and school levels. The titles
and keywords of all entries are indexed nightly so that the system is fully searchable.
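The nightly indexing of titles and keywords can be sketched as a small inverted index. The interview does not describe Eidos's actual search mechanism, so this is illustrative only, with made-up entries.

```python
from collections import defaultdict

# Hypothetical faculty-activity entries; only titles and keywords
# are indexed, mirroring what the interview describes.
entries = [
    {"id": 1, "title": "Ethics of Research on Human Tissue",
     "keywords": ["ethics", "genetics"]},
    {"id": 2, "title": "Assessment in the Liberal Arts",
     "keywords": ["assessment", "rubrics"]},
]

def build_index(entries):
    """Nightly rebuild: map each lowercased term from titles and
    keywords to the set of entry ids that contain it."""
    index = defaultdict(set)
    for e in entries:
        terms = e["title"].lower().split() + [k.lower() for k in e["keywords"]]
        for term in terms:
            index[term].add(e["id"])
    return index

index = build_index(entries)

def search(index, term):
    """Ids of entries whose title or keywords contain the term."""
    return sorted(index.get(term.lower(), set()))

hits = search(index, "ethics")
```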
The Eidos system is integrated with our Banner student
information system and has a wide range of data that are refreshed nightly. There
are data analysis reports such as grade analyses that look at aggregate GPA averages
by school, department, subject code and down to individual courses. There are student
enrollment reports, graduation reports, and a set of reports to pull individual
student records based on a variety of search criteria. For instance, one report
will display all the students who have X number of part-time faculty teaching
their courses in a given semester. Another report can pull all the students with
X number of F’s, D’s, or C’s from a given semester.
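A report like "all students with X F's in a given semester" is a GROUP BY/HAVING query over the nightly Banner refresh. A sketch against a made-up grades table (the real table and column names are not given in the interview):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE grades
    (student TEXT, course TEXT, semester TEXT, grade TEXT)""")
conn.executemany("INSERT INTO grades VALUES (?, ?, ?, ?)", [
    ("s1", "PH 101", "Fall 2009", "F"),
    ("s1", "MA 121", "Fall 2009", "F"),
    ("s1", "EN 110", "Fall 2009", "C"),
    ("s2", "PH 101", "Fall 2009", "B"),
    ("s3", "MA 121", "Fall 2009", "F"),
])

def students_with_n_grades(grade, n, semester):
    """All students who received at least n of a given letter grade
    in one semester -- e.g. an early-warning list of students with 2 F's."""
    return [s for (s,) in conn.execute("""
        SELECT student FROM grades
        WHERE grade = ? AND semester = ?
        GROUP BY student
        HAVING COUNT(*) >= ?""", (grade, semester, n))]

at_risk = students_with_n_grades("F", 2, "Fall 2009")
```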
In addition to the above major components of Eidos, I
also built some other applications for the University, including an online IRB system,
jobs bulletin board, student internship management system, a Phi Beta Kappa nomination
system, and a bunch of other little applications.
We have had great success with the Eidos system. It has
been a central part of our accreditation efforts. Eidos was cited as a best practice
by the AACSB visiting team for our most recent Maintenance of Accreditation visit,
and was featured in an Advisory Board report to their Council of Provosts. I have
presented on the Eidos system and assessment at a number of conferences and the
repeated refrain I hear is “When will it be commercial?” I am happy to say that
a Connecticut company, Axiom Group, is developing the Eidos system for commercial
use; it will be marketed under the trade name “Mentor.” I am a partner in that enterprise,
which has put me in the odd position of negotiating to license my own software from
the University and myself. We should be entertaining customers no later than the
first of the year, though we are looking for a couple of early adopters to get us
off the ground.
e-AIR: What do your responsibilities
as Facilitator for Academic Assessment entail?
Curt: My first responsibility as Facilitator
for Academic Assessment is to work with each of our schools on their assessment
projects. I maintain and administer the Eidos system, and I work with each of the
Deans and their assessment committees to develop and use the assessment tools in
Eidos. I do a lot of training of faculty in the course management aspects of the
system, and this allows me to learn about the teaching methods in a wide range of
disciplines.
Each of our professional schools, because of their accreditation
requirements, has had a well-developed assessment program in place, and so I work
as a consultant to those schools. Our College of Arts & Sciences, on the other
hand, has been, shall we say, a bit slower to buy into assessment. I have chaired
the assessment committee for the College since 2007. We have a representative from
each of the 15 departments in the College, whom we have trained in assessment methods.
We started out focusing on the core curriculum and over the past two years departments
have been developing and executing pilot assessment programs. Not everyone buys
into assessment or moves at the same pace through the process. We have some departments
that are enthusiastic and are doing very good work, and others that are dragging
their feet and trying to do as little as possible.
I also work with our Student Affairs division. We started
an interesting project a year and a half ago. We asked our RA’s to assess their
residents’ engagement and their relationship to the RA using a 9-trait rubric. We
gave each RA a course in the course management system. We asked the RA’s to go into
their course, find the links to the rubric for each student and complete the rubric.
In the pilot test, we only did the freshman floors and got 675 students assessed
by their RA’s out of 800 first year students that year. The following semester we
got over 1700 freshmen, sophomores and juniors assessed by their RA’s. The beauty
of an electronic assessment system is that it cost us absolutely nothing to create
these courses, deploy the rubrics, collect the data, and generate summary reports.
We are presently linking the data up to some of the national survey data we have
to see what relationship the RA assessments have to student self-reports on surveys.
An interesting benefit of this project was that the RA’s loved having the tools
of a general course management system at their fingertips. They get a class email
form, a place to post documents, a discussion board, etc. We are now also using the
Eidos system to support our FYE courses.
e-AIR: What have been your greatest challenges
related to assessment?
Curt: There are a few challenges:
First, as a faculty member and philosopher, some of my colleagues wonder about me.
Why am I doing database programming and building web systems? This doesn’t look
like scholarship… True enough, but I have always felt that this work was just doing
philosophy otherwise. I would suggest that there is only one problem in philosophy:
that of “the one and the many.” Database programming is, of course, all about the
one and the many: one student, many courses; one course, many assignments; one school,
many departments; and so on. And programming is all about inferential logic. This
does not always convince my colleagues but they tolerate me because the system generates
useful information. My Dean describes me as the guy who shows up at the doctor with
a chicken stuck to his head. The doctor says that he can remove it, but the patient
objects, citing the good eggs the chicken lays….
Also, the course management system and its integration
of assessment methodologies is a hybrid of teaching, scholarship and service. Operationalizing
the scholarship of teaching and learning in an online system opens new doors on
the assessment process and is itself an advancement of teaching methods. I won’t
say this is philosophical research, but it is pedagogical research in the Scholarship
of Teaching and Learning tradition.
The bigger challenge, however, is one that all of us in
assessment of student learning face: faculty are often quite resistant to assessment.
By and large, faculty in professional schools recognize that assessment is necessary
for accreditation, and that assessment actually has some usefulness in their programs
and teaching. Arts and sciences faculty, however, rarely encounter accreditation
beyond the once-a-decade visit from regional accreditors. They can
view assessment in a variety of negative ways: something that K-12 schools do, a
threat to academic freedom, an attempt by administrators to evaluate their teaching
performance, uncompensated extra work, the attempt to measure what is immeasurable,
and so on. And those who share these and other objections are often quite animated
and vehement in their rejection of assessment.
This makes assessment hard! I think anyone who has attempted
to move assessment forward in an arts and sciences faculty has some battle scars
and has learned various strategies to work around, deflect, and address these criticisms.
It is not easy, but with patience and kindness, and the support of the key academic
administrators (Deans and Provosts), assessment can and will happen in the humanities,
social sciences and sciences.
One other problem that is peculiar to me (I assume most
assessment folks don’t program their own database to web systems…): the work I do
with the Eidos system, whether it be in assessment or collecting, analyzing and
making available boat loads of institutional data, is an agent of change. It crosses
traditional boundaries (IT, IR, teaching, reporting, etc…) and this makes some people
nervous. The advantage I have had is that I approach my work as a scholar: I am
turned on by questions and challenges and I explore a lot of things for the fun
of it. I also am a teacher and bring over 20 years of teaching experience to my
work in assessment, IR, and programming. That is probably a rare combination.
But institutional change is hard and whether it is introducing assessment or putting
data that has long been hidden on dusty shelves in everyone’s hands, there are always
those who resist change. One needs lots of patience for those who resist the change,
ears to hear their legitimate points of contention, and the support of those who
embrace it.
e-AIR: What words of advice would you
offer to an institution that is just beginning to launch assessment efforts?
Curt: A couple of useful tips:
I want to follow up on number 4 above with an anecdote.
We had a two year process to introduce our Arts & Sciences faculty to assessment.
We started with a two day workshop with Doug Eder; we got faculty representatives
from each department to work with their departments on articulating core learning
objectives and then we worked through methods for how to assess these learning objectives.
We had good conversations and bad conversations as faculty expressed their reservations
but still engaged in the pedagogical issues. About 18 months into the process, we
did a workshop on writing rubrics. A lot of faculty have misconceptions and prejudices
about rubrics and it was a sore point with them. But we finally sat down, about
25 of us from all departments in the College, and we handed out two student papers
from my own ethics class. One was decent, the other needed some work. The participants
read the papers and we just had them respond to the group at large: what did they
like in each paper, what didn’t work well, and so on. We all do this for a living
so it comes naturally! We then asked them to think about what the elements are of
a good paper in this area. They came up with a variety of “traits” that they thought
were important. At one point, a very skeptical faculty member raised her hand and
shouted out, “We should have my whole department here, this is so useful!” That
was the tipping point. Most folks present began to get a sense of the usefulness
of assessing and the usefulness of the process. If I had it to do over again, I
would have started out the whole process with this workshop. You don’t even need
to call it assessment. You don’t need any theory or methods. Just get some faculty
together and reflect on the quality of some concrete student work. The methods and
theory can follow.
e-AIR: Let’s close by talking about your
life outside of work. I understand you enjoy the outdoors and are an avid hiker.
Please tell us about these and other interests.
Curt: Well, last year my then 10-year-old
son did 180 miles of hiking. I think I was up around 210 miles, as I did some
hikes when I went away to conferences that he couldn’t go on. We will have hiked
around 150 miles by the end of this year. I run a hiking group that was started
by a teacher at my kids’ school, but it has evolved over the years into more of
a hiking group here at the University. We started taking students from the University
out last year and students this year are getting FYE credit for hikes. There are
some glorious places in CT, MA and NY to hike. We spend a lot of time in the Shawangunk
Mountains in NY. I consider it the Disneyland of hiking … it has some of the coolest
rock formations, crevices and cliffs to explore.
Hiking is probably the only extended time I am away from
my computer. With 4000 people using the Eidos system, I sort of have to keep tabs
on things and respond to requests for help in a timely manner.
I do play guitar … a bit of a hack, but it is fun. That
comes from my long history as a Grateful Dead head, having been to at least 100
Dead concerts (avid even by Deadhead standards!).
e-AIR: Curt, thank you so much for participating
in this interview!
We welcome your feedback on this feature, including suggestions
for individuals to be interviewed and questions you would like to have posed in
future interviews. Please e-mail your comments and suggestions to Marne Einarson
Copyright © 2009 Association for Institutional
Research. All Rights Reserved.
1435 E. Piedmont Drive, Suite 211 | Tallahassee, FL 32308 | 850-385-4155 | Fax: