Bob Jensen's Threads on Assessment

Bob Jensen at Trinity University

George Carlin - Who Really Controls America --- Click Here
"More kids pass tests if we simplify the tests --- Why education will never be fixed."

The Downfall of Lecturing

Teaching Evaluations and RateMyProfessor

Concept Knowledge

Onsite Versus Online Differences for Faculty

Online Versus Onsite for Students.

Onsite Versus Online Education (including controls for online examinations and assignments)

Student Engagement

Online Education Effectiveness and Testing

What Works in Education?

Predictors of Success

Team Grading

Too Good to Grade:  How can these students get into doctoral programs and law school if their prestigious universities will not disclose grades and class rankings?  Why grade at all in this case?

Software for faculty and departmental performance evaluation and management

School Assessment and College Admission Testing

Civil Rights Groups That Favor Standardized Testing

Computer Grading of Essays

Assessment in General

AICPA Educational Competency Assessment for Accounting Students

Assessment Issues: Measurement and No-Significant-Differences

Dangers of Self Assessment

The Criterion Problem 

Success Stories in Education Technology

Research Versus Teaching
"Favorite Teacher" Versus "Learned the Most"

Grade Inflation Versus Teaching Evaluations

Student Evaluations and Learning Styles   

Assessment Takes Center Stage in Online Learning:  The Saga of Western Governors University

Measures of Quality in Internet-Based Distance Learning

Number Watch: How to Lie With Statistics

Drop Out Problems   

On the Dark Side 

Accreditation Issues

Software for Online Examinations and Quizzes

Onsite Versus Online Education (including controls for online examinations and assignments)

The term "electroThenic portfolio," or "ePortfolio," is on everyone's lips.  What does this mean? 

Research Versus Teaching
"Favorite Teacher" Versus "Learned the Most"

Grade Inflation Versus Course Evaluations  

Work Experience Substitutes for College Credits

Certification Examinations  

Should attendance guarantee passing?

Peer Review Controversies in Academic Journals

Differences between "popular teacher"
versus "master teacher"
versus "mastery learning"
versus "master educator."

Degrees Versus Piecemeal Distance (Online) Education

Full Disclosure to Consumers of Higher Education (including assessment of colleges and the Spellings Commission Report) --- http://www.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure
Also see http://www.trinity.edu/rjensen/HigherEdControversies.htm#Bok

Publish Exams Online ---
http://www.examprofessor.com/main/index.cfm

Controversies in Higher Education ---
http://www.trinity.edu/rjensen/HigherEdControversies.htm

Bob Jensen's threads on cheating and plagiarism ---
http://www.trinity.edu/rjensen/plagiarism.htm

Effort Reporting Technology for Higher Education ---
http://www.huronconsultinggroup.com/uploadedFiles/ECRT_email.pdf

Some Thoughts on Competency-Based Training and Education ---
http://www.trinity.edu/rjensen/competency.htm

You can download (for free) hours of MP3 audio and the PowerPoint presentation slides from several of the best education technology workshops that I ever organized. --- http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm 

Asynchronous Learning Advantages and Disadvantages ---
http://www.trinity.edu/rjensen/255wp.htm

Dark Sides of Education Technologies ---
http://www.trinity.edu/rjensen/000aaa/0000start.htm

For threaded audio and email messages from early pioneers in distance education, go to http://www.trinity.edu/rjensen/ideasmes.htm 

Full Disclosure to Consumers of Higher Education at 
http://www.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure 
From PhD Comics: Helpers for Filling Out Teaching Evaluations --- 
http://www.phdcomics.com/comics.php?f=847  
"How Do People Learn," Sloan-C Review, February 2004 --- 
http://www.aln.org/publications/view/v3n2/coverv3n2.htm 

Like some of the other well known cognitive and affective taxonomies, the Kolb figure illustrates a range of interrelated learning activities and styles beneficial to novices and experts. Designed to emphasize reflection on learners’ experiences, and progressive conceptualization and active experimentation, this kind of environment is congruent with the aim of lifelong learning. Randy Garrison points out that:

From a content perspective, the key is not to inundate students with information. The first responsibility of the teacher or content expert is to identify the central idea and have students reflect upon and share their conceptions. Students need to be hooked on a big idea if learners are to be motivated to be reflective and self-directed in constructing meaning. Inundating learners with information is discouraging and is not consistent with higher order learning . . . Inappropriate assessment and excessive information will seriously undermine reflection and the effectiveness of asynchronous learning. 

Reflection on a big question is amplified when it enters collaborative inquiry, as multiple styles and approaches interact to respond to the challenge and create solutions. In How People Learn: Brain, Mind, Experience, and School, John Bransford and colleagues describe a legacy cycle for collaborative inquiry, depicted in a figure by Vanderbilt University researchers  (see image, lower left).

Continued in the article


December 12, 2003 message from Tracey Sutherland [return@aaahq.org]

THE EDUCATIONAL COMPETENCY ASSESSMENT (ECA) WEB SITE IS LIVE! http://www.aicpa-eca.org 

The AICPA provides this resource to help educators integrate the skills-based competencies needed by entry-level accounting professionals. These competencies, defined within the AICPA Core Competency Framework Project, have been derived from academic and professional competency models and have been widely endorsed within the academic community. Created by educators for educators, the evaluation and educational strategies resources on this site are offered for your use and adaptation.

The ECA site contains a LIBRARY that, in addition to the Core Competency Database and Education Strategies, provides information and guidance on Evaluating Competency Coverage and Assessing Student Performance.

To assist you as you assess student performance and evaluate competency coverage in your courses and programs, the ECA ORGANIZERS guide you through the process of gathering, compiling and analyzing evidence and data so that you may document your activities and progress in addressing the AICPA Core Competencies.

The ECA site can be accessed through the Educator's page of aicpa.org, or at the URL listed above.

 

The Downfall of Lecturing

My Hero at the American Accounting Association Meetings in San Antonio on August 13, 2002 --- Amy Dunbar

How do students evaluate Amy Dunbar's online tax courses?

This link is a pdf doc that I will be presenting at a CPE session with Bob Jensen, Nancy Keeshan, and Dennis Beresford at the AAA on Tuesday. I updated the paper I wrote that summarized the summer 2001 online course. You might be interested in the exhibits, particularly Exhibit II, which summarizes student responses to the learning tools over the two summers. This summer I used two new learning tools: synchronous classes (I used Placeware) and RealPresenter videos. My read of the synchronous class comments is that most students liked having synchronous classes, but not often and not long ones! 8 of the 57 responding students thought the classes were a waste of time. 19 of my students, however, didn't like the RealPresenter videos, partly due to technology problems. Those who did like them, however, really liked them and many wanted more of them. I think that as students get faster access to the Internet, the videos will be more useful.

http://www.sba.uconn.edu/users/adunbar/genesis_of_an_online_course_2002.pdf 

Amy Dunbar 
UConn


Random Thoughts (about learning from a retired professor of engineering) ---  http://www4.ncsu.edu/unity/lockers/users/f/felder/public/Columns.html

Dr. Felder's column in Chemical Engineering Education

Focus is heavily upon active learning and group learning.

Bob Jensen's threads on learning are in the following links:

http://www.trinity.edu/rjensen/assess.htm

http://www.trinity.edu/rjensen/255wp.htm

http://www.trinity.edu/rjensen/265wp.htm


March 3, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu]

WHAT LEADS TO ACHIEVING SUCCESS IN DISTANCE EDUCATION?

"Achieving Success in Internet-Supported Learning in Higher Education," released February 1, 2005, reports on the study of distance education conducted by the Alliance for Higher Education Competitiveness (A-HEC). A-HEC surveyed 21 colleges and universities to "uncover best practices in achieving success with the use of the Internet in higher education." Some of the questions asked by the study included:

"Why do institutions move online? Are there particular conditions under which e-Learning will be successful?"

"What is the role of leadership and by whom? What level of investment or commitment is necessary for success?"

"How do institutions evaluate and measure success?"

"What are the most important and successful factors for student support and faculty support?"

"Where do institutions get stuck? What are the key challenges?"

The complete report is available online, at no cost, at http://www.a-hec.org/e-learning_study.html.

The "core focus" of the nonprofit Alliance for Higher Education Competitiveness (A-HEC) "is on communicating how higher education leaders are creating positive change by crystallizing their mission, offering more effective academic programs, defining their role in society, and putting in place balanced accountability measures." For more information, go to http://www.a-hec.org/ . Individual membership in A-HEC is free.


Hi Yvonne,

For what it is worth, my advice to new faculty is at http://www.trinity.edu/rjensen/000aaa/newfaculty.htm 

One thing to remember is that the employers of our students (especially the public accounting firms) are very unhappy with our lecture/drill pedagogy at the introductory and intermediate levels. They believe that such pedagogy turns away top students, especially creative and conceptualizing students. Employers believe that lecture/drill pedagogy attracts savant-like memorizers who can recite their lessons chapter and verse but have few creative talents and poor prospects for becoming leaders. The large accounting firms believed this so strongly that they donated several million dollars to the American Accounting Association for the purpose of motivating new pedagogy experimentation. This led to the Accounting Education Change Commission (AECC) and the mixed-outcome experiments that followed. See http://accounting.rutgers.edu/raw/aaa/facdev/aecc.htm 

The easiest pedagogy for faculty is lecturing, and it is appealing to busy faculty who do not have time for students outside the classroom. When lecturing to large classes it is even easier because you don't have to get to know the students and have a great excuse for using multiple choice examinations and graduate student teaching assistants. I always remember an economics professor at Michigan State University who said that when teaching basic economics it did not matter whether he had a live class of 300 students or a televised class of 3,000 students. His full-time teaching load was three hours per week in front of a TV camera. He was a very good lecturer and truly loved his three-hour per week job!

Lecturing appeals to faculty because it often leads to the highest teaching evaluations.  Students love faculty who spoon feed and make learning seem easy.  It's much easier when mom or dad spoon the pudding out of the jar than when you have to hold your own spoon and/or find your own jar.

An opposite but very effective pedagogy is the AECC (University of Virginia) BAM Pedagogy that entails live classrooms with no lectures. BAM instructors think it is more important for students to learn on their own instead of sitting through spoon-fed lectures. I think it takes a special kind of teacher to pull off the astoundingly successful BAM pedagogy. Interestingly, it is often some of our best lecturers who decided to stop lecturing because they experimented with the BAM and found it to be far more effective for long-term memory. The top BAM enthusiasts are Tony Catanach at Villanova University and David Croll at the University of Virginia. Note, however, that most BAM applications have been at the intermediate accounting level. I suspect (and I think BAM instructors will agree) that BAM would probably fail at the introductory level. You can read about the BAM pedagogy at http://www.trinity.edu/rjensen/265wp.htm 

At the introductory level we have what I like to call the Pincus (User Approach) Pedagogy. Karen Pincus is now at the University of Arkansas, but at the time that her first learning experiments were conducted, she taught basic accounting at the University of Southern California. The Pincus Pedagogy is a little like both the BAM and the case method pedagogies. However, instead of having prepared learning cases, the Pincus Pedagogy sends students to on-site field visitations where they observe on-site operations and are then assigned tasks to creatively suggest ways of improving existing accounting, internal control, and information systems. Like the BAM, the Pincus Pedagogy avoids lecturing and classroom drill. Therein lies the controversy. Students and faculty in subsequent courses often complain that the Pincus Pedagogy students do not know the fundamental prerequisites of basic accounting needed for intermediate and advanced-level accounting courses.  Two possible links of interest on the controversial Pincus Pedagogy are as follows:  

Where the Pincus Pedagogy and the BAM Pedagogy differ is in the subject matter itself and the stress on creativity. The BAM focuses on traditional subject matter of the kind found in intermediate accounting textbooks. The BAM Pedagogy simply requires that students learn any way they want on their own, since students remember best what they learn by themselves. The Pincus Pedagogy does not focus on many of the debit and credit "rules" found in most traditional textbooks. Students are required to be more creative at the expense of memorizing the "rules."

The Pincus Pedagogy is motivated by the belief that traditional lecture/drill pedagogy at the basic accounting and tax levels discourages the best and most creative students from pursuing careers in the accountancy profession. The BAM pedagogy is motivated more by the belief that lecturing is a poor pedagogy for long-term memory of technical details. What is interesting is that the leading proponents of getting away from the lecture/drill pedagogy (i.e., Karen Pincus and Tony Catanach) were previously two of the very best lecturers in accountancy. If you have ever heard either of them lecture, I think you would agree that you wish all your lecturers had been only half as good. I am certain that both of these exceptional teachers would agree that lecturing is easier than any of the alternatives. However, they do not feel that lecturing is the best alternative for top students.

Between lecturing and the BAM Pedagogy, we have case method teaching. Case method teaching is a little like lecturing and a little like the BAM with some instructors providing answers in case wrap ups versus some instructors forcing students to provide all the answers. Master case teachers at Harvard University seldom provide answers even in case wrap ups, and often the cases do not have any known answer-book-type solutions. The best Harvard cases have alternative solutions with success being based upon discovering and defending an alternative solution. Students sometimes interactively discover solutions that the case writers never envisioned. I generally find case teaching difficult at the undergraduate level if students do not yet have the tools and maturity to contribute to case discussions. Interestingly, it may be somewhat easier to use the BAM at the undergraduate level than Harvard-type cases. The reason is that BAM instructors are often dealing with more rule-based subject matter such as intermediate accounting or tax rather than conceptual subject matter such as strategic decision making, business valuation, and financial risk analysis.

The hardest pedagogy today is probably a Socratic pedagogy online with instant messaging communications, where an instructor is on call about 60 hours per week from his or her home. The online instructor monitors the chats and team communications between students in the course at almost any time of day or night. Amy Dunbar can tell you about this tedious pedagogy since she's using it for tax courses and will be providing a workshop on how to do it and how not to do it. The next scheduled workshop precedes the AAA Annual Meetings on August 1, 2003 in Hawaii. You can also hear Dr. Dunbar and view her PowerPoint show from a previous workshop at http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm#2002 

In conclusion, always remember that there is no optimal pedagogy in all circumstances. All learning is circumstantial based upon such key ingredients as student maturity, student motivation, instructor talent, instructor dedication, instructor time, library resources, technology resources, and many other factors that come to bear at each moment in time. And do keep in mind that how you teach may determine what students you keep as majors and what you turn away. 

I tend to agree with the accountancy firms that contend that traditional lecturing probably turns away many of the top students who might otherwise major in accountancy. 

At the same time, I tend to agree with students who contend that they took accounting courses to learn accounting rather than economics, computer engineering, and behavioral science.

Bob Jensen

-----Original Message----- 
From: Lou&Bonnie [mailto:gyp1@EARTHLINK.NET]  
Sent: Thursday, January 16, 2003 5:03 PM

I am a beginning accounting instructor (part-time) at a local community college. I am applying for a full-time faculty position, but am having trouble with a question: methodology in accounting--what works best for a diversified group of individuals? Some students work with accounting, but only on a computer, and have no understanding of what the information they are entering really means; other individuals have no accounting experience whatsoever. What is the best methodology to use: lecture, overhead, classroom participation? I am not sure and I would like your feedback. Thank you in advance for your help. 

Yvonne


January 20, 2003 reply from Thomas C. Omer [omer@UIC.EDU]

Don’t forget about Project Discovery going on at the University of Illinois Champaign-Urbana

Thomas C. Omer Associate Professor 
Department of Accounting University of Illinois At Chicago 
The Art of Discovery: Finding the forest in spite of the trees.

Thanks for reminding me Tom. A good link for Project Discovery is at http://accounting.rutgers.edu/raw/aaa/facdev/aeccuind.htm 


January 17, 2003 reply from David R. Fordham [fordhadr@JMU.EDU]

I'll add an endorsement to Bob's advice to new teachers. His page should be required reading for Ph.D.s.

And I'll add one more tidbit.

Most educators overlook the distinction between "lectures" and "demonstrations".

There is probably no need for any true "lecture" in the field of accounting at the college level, even though it is still the dominant paradigm at most institutions.

However, there is still a great need for "live demonstrations", **especially** at the introductory level.

Accounting is a complex process. Introductory students in ANY field learn more about complex processes from demonstrations than probably any other method.

Then, they move on and learn more from "practicing" the process, once they've learned the steps and concepts of the process. And for intermediate and advanced students, practice is the best place to "discover" the nuances and details.

While "Discovery" is probably the best learning method of all, it is frequently very difficult to "discover" a complex process correctly from its beginning, on your own. Thus, a quick demonstration can often be of immense value at the introductory level. It's an efficient way of communicating sequences, relationships, and dynamics, all of which are present in accounting processes.

Bottom line: You can (and should) probably eliminate "lectures" from your classes. You should not entirely eliminate "demonstrations" from your classes.

Unfortunately, most education-improvement reform literature does not draw the distinction: anytime the teacher is doing the talking in front of a class, using blackboard and chalk or PowerPoint, they label it "lecture" and suggest you don't do it! This is, in my view, oversimplification, and very bad advice.

Your teaching will change a whole lot (for the better!) once you realize that students only need demonstrations of processes. You will eliminate a lot of material you used to "lecture" on. This will make room for all kinds of other things that will improve your teaching over the old "lecture" method: discussions, Socratic dialogs, cases and dilemmas, even some entertainment here and there.

Plus, the "lectures" you retain will change character. Take your cue from Mr. Wizard or Bill Nye the Science Guy, who appear to "lecture" (it's about the only thing you can do in front of a camera!), but whose entire program is pretty much devoted to demonstration. Good demonstrations do more than just demonstrate, they also motivate! Most lectures don't!

Another two pennies from the verbose one...

David R. Fordham 
PBGH Faculty Fellow 
James Madison University

January 16, 2003 message from Peter French [pjfrench@CELESTIAL.COM.AU]

I found this source http://www.thomson.com/swcp/gita.html  and also Duncan Williamson has some very good basic material on his sites http://duncanwil.co.uk/index.htm  ; http://www.duncanwil.co.uk/objacc.html  ;

Don't forget the world lecture hall at http://www.utexas.edu/world/lecture/  ;

This reminds me of how I learned ... the 'real learning' in the workplace...

I remember my first real-life consolidation - 130 companies in 1967. We filled a wall with butchers paper and had 'callers', 'writers' and 'adders' who called out the information to others who wrote out the entries and others who did the adding. I was 25 and quite scared. The Finance Director knew this and told me [1] to stick with 'T' accounts to be sure I was making the right entry - just stick in the ones you are sure of and don't even think about the other entry - it must 'balance' out; and [2] just because we are dealing with 130 companies and several hundreds of millions of dollars, don't lose sight of the fact that really it is no different from the corner store. I have never forgotten that simple approach. He said: if the numbers scare you, decimalise them to 100,000's in your mind - it helps ... and it did. He often used to say the Dr/Cr entries out loud.

I entered teaching aged 48 after having been in industry and practice for nearly 30 years. Whether I am teaching introductory accounting, partnership formation/dissolution, consolidations, asset revaluation, or tax effect accounting, I simply write up the same basic entries on the white board each session - I never use an overhead for this; I always write it up and say it out loud, and most copy/follow me - and then recap and get on with the lesson. I always take time out to 'flow chart' what we are doing so that they never lose sight of the real picture ... this simple system works, and it has never let my students down.

There have been several movements away from rote learning at all levels of education - often with disastrous consequences. It has its place and I am very proud to rely on it. This works, and since it isn't broken, I am not about to try to fix it.

Good luck - it is the greatest responsibility in the world, and gives the greatest job satisfaction. It is worth every hour and every grey hair. To realise that you have enabled someone to change their life, or made a dream come true, eclipses every successful takeover battle or tax fight that I have ever won.

Good luck - may it be to you what it has been to me.

Peter French
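
Jensen Comment:  The balancing discipline that Peter French describes above (every set of T-account entries must net to zero before it goes on the worksheet) can be illustrated in a few lines of code. The sketch below is mine, not Peter's; the account names and amounts are invented purely for illustration, and the check is nothing more than "total debits must equal total credits."

# A minimal sketch (Python) of the "it must balance" rule.
# Accounts and amounts are hypothetical.
entries = [
    # (account, debit, credit) -- a simple intercompany elimination
    ("Investment in Subsidiary", 0, 100000),
    ("Share Capital - Subsidiary", 60000, 0),
    ("Retained Earnings - Subsidiary", 40000, 0),
]
total_debits = sum(debit for _, debit, _ in entries)
total_credits = sum(credit for _, _, credit in entries)
if total_debits == total_credits:
    print("Balanced: %s Dr = %s Cr" % (total_debits, total_credits))
else:
    print("Out of balance by %s" % abs(total_debits - total_credits))

The same check scales from the corner store to a 130-company consolidation; only the number of rows changes.
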

January 17, 2003 reply from Michael O'Neil, CPA Adjunct Prof. Weber [Marine8105@AOL.COM]

I am currently teaching high school students, some of whom will hopefully go on to college. Parents expect you to teach the children, which really amounts to lecturing, or going over the text material. When you do this they do not read the textbook, nor do they know how to use the textbook to answer homework questions. If you don't lecture then the parents will blame you for "not" teaching their children the material.

I agree that discovery is the best type of learning, and the most fun. I teach geometry and accounting/consumer finance. Geometry lends itself to discovery, but to do so you need certain materials. At our level (high school) we are also dealing with several other issues you don't have at the college level. In my accounting classes I teach the debits/credits, etc., and then have them do a lot of work using two different accounting programs. When they make errors I have them discover the error and correct it. They probably know very little about posting and the formatting of financial statements, although we covered it. Before we used the programs we did a lot of pencil work.

Even when I taught accounting at the college and junior college level I found students were reluctant to, and not well prepared to, use their textbooks. Nor were they inclined to DO their homework.

I am sure that many of you have noticed a drop-off in the quality of students in recent years. I wish I could tell you that I see that it will change, but I do not see any effort in that direction. Education reminds me of a hot air balloon being piloted by people who lease the balloon and have no idea how to land it. They are just flying around enjoying the view. If we think in terms of bankruptcy, education is ready for Chapter 11.

Mike ONeil

January 17, 2003 reply from Chuck Pier [texcap@HOTMAIL.COM]

While not in accounting, I would like to share some information on my wife's experience with online education. She has a background (10 years) as a public school teacher and decided to get her graduate degree in library science. Since I was about to finish my doctoral studies and we knew we would be moving she wanted to find a program that would allow her to move away and not lose too many hours in the transfer process. What she found was the online program at the University of North Texas (UNT) in Denton. Through this program she will be able to complete a 36 hour American Library Association accredited Master's degree in Library Science and only spend a total of 9 days on campus. The 9 days are split into a one day session and 2 four day sessions, which can be combined into 1 five and 1 four day session. Other than these 9 days the entire course is conducted over the internet. The vast majority is asynchronous, but there are some parts conducted in a synchronous manner.

She has completed about 3/4 of the program and is currently in Denton for her last on campus session. While I often worry about the quality of online programs, after seeing how much work and time she is required to put in, I don't think I should worry as much. I can honestly say that I feel she is getting a better, more thorough education than most traditional programs. I know at a minimum she has covered a lot more material.

All in all her experience has been positive and this program fit her needs. I think the MLS program at UNT has been very successful to date and appears to be growing quite rapidly. It may serve as a role model for programs in other areas.

Chuck Pier

Charles A. Pier 
Assistant Professor Department of Accounting 
Walker College of Business 
Appalachian State University 
Boone, NC 28608 email:
pierca@appstate.edu  828-262-6189

Concept Knowledge

June 18, 2006 message from Bob Kennelly [bob_kennelly@YAHOO.COM]

I am a data analyst with the Federal Government, recently assigned a project to integrate our accounting codes with XBRL accounting codes, primarily for the quarterly reporting of banking financial information.
 
For the past few weeks, I've been searching the Web looking for educational materials that will help us map, roll up, and roll down the data that we receive from the banks that we regulate to the more generic XBRL accounting codes.
 
Basically, I'm hoping to provide my team members with tools to help them make more informed decisions on how to classify accounting codes and to capture their findings for further review and discussion.
 
To my surprise there isn't the wealth of accounting information that I thought there would be on the Web, but I am very relieved to have found Bob Jensen's site and in particular an article which refers to the kind of information gathering approaches that I'm hoping to discover!
 
Here is the brief on that article:
"Using Hypertext in Instructional Material:  Helping Students Link Accounting Concept Knowledge to Case Applications," by Dickie Crandall and Fred Phillips, Issues in Accounting Education, May 2002, pp. 163-184
---
http://accounting.rutgers.edu/raw/aaa/pubs.htm
 
We studied whether instructional material that connects accounting concept discussions with sample case applications through hypertext links would enable students to better understand how concepts are to be applied to practical case situations.
 
Results from a laboratory experiment indicated that students who learned from such hypertext-enriched instructional material were better able to apply concepts to new accounting cases than those who learned from instructional material that contained identical content but lacked the concept-case application hyperlinks. 
 
Results also indicated that the learning benefits of concept-case application hyperlinks in instructional material were greater when the hyperlinks were self-generated by the students rather than inherited from instructors, but only when students had generated appropriate links. 
 
Could anyone be so kind as to suggest other references, articles, or tools that will help us better understand and classify the broad range of accounting terminologies and methodologies?
 
For more information on XBRL, here is the XBRL link: http://xbrl.org
 
Thanks very much!
Bob Kennelly
OFHEO

June 19, 2006 reply from Bob Jensen

Hi Bob,

You may find the following documents of related interest:

"Internet Financial Reporting: The Effects of Hyperlinks and Irrelevant Information on Investor Judgments," by Andrea S. Kelton (Ph.D. Dissertation at the University of Tennessee) --- http://www.mgt.ncsu.edu/pdfs/accounting/kelton_dissertation_1-19-06.pdf

Extendible Adaptive Hypermedia Courseware: Integrating Different Courses and Web Material
Lecture Notes in Computer Science, Vol. 1892 (2000): Adaptive Hypermedia and Adaptive Web-Based Systems: International Conference, AH 2000, Trento, Italy, August 2000, Proceedings, edited by P. Brusilovsky, O. Stock, and C. Strapparava, Springer Berlin/Heidelberg, ISSN 0302-9743 --- Click Here

"Concept, Knowledge, and Thought," G. C. Oden, Annual Review of Psychology Vol. 38: 203-227 (Volume publication date January 1987) --- Click Here

"A Framework for Organization and Representation of Concept Knowledge in Autonomous Agents," by Paul Davidsson,  Department of Computer Science, University of Lund, Box 118, S–221 00 Lund, Sweden email: Paul.Davidsson@dna.lth.se

"Active concept learning for image retrieval in dynamic databases," by Dong, A. Bhanu, B. Center for Res. in Intelligent Syst., California Univ., Riverside, CA, USA; This paper appears in: Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on Publication Date: 13-16 Oct. 2003 On page(s): 90- 95 vol.1 ISSN: ISBN: 0-7695-1950-4 --- Click Here

"Types and qualities of knowledge," by Ton de Jong, ​‌Monica G.M. Ferguson-Hessler, Educational Psychologist 1996, Vol. 31, No. 2, Pages 105-113 --- Click Here

Also note http://www.trinity.edu/rjensen/assess.htm#DownfallOfLecturing

Hope this helps
Bob Jensen
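
Jensen Comment:  For readers wondering what the mapping and roll-up task described in Bob Kennelly's message above might look like in practice, here is a minimal sketch in Python. All of the account codes, element names, and amounts are hypothetical and are not drawn from any actual taxonomy; a real mapping would be driven by the XBRL taxonomy used for bank reporting rather than by a hand-built dictionary.

from collections import defaultdict

# Hypothetical mapping from institution-specific account codes to generic
# XBRL-style elements (codes and element names are invented for illustration).
code_to_element = {
    "1010": "CashAndDueFromBanks",
    "1020": "CashAndDueFromBanks",
    "2100": "LoansAndLeasesNet",
    "2110": "LoansAndLeasesNet",
    "3000": "Deposits",
}

# Hypothetical balances reported by a bank, keyed by its own account codes.
reported = {"1010": 5000, "1020": 1200, "2100": 80000, "2110": 2500, "9999": 300}

rollup = defaultdict(int)   # totals by generic element
unmapped = {}               # codes the team still needs to classify

for code, amount in reported.items():
    element = code_to_element.get(code)
    if element is None:
        unmapped[code] = amount
    else:
        rollup[element] += amount

for element, total in sorted(rollup.items()):
    print(element, total)
if unmapped:
    print("Needs classification:", unmapped)

Flagging the unmapped codes, as in the last two lines, is one simple way to capture findings for further review and discussion while the team refines its classifications.
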


Assessing-to-Learn Physics: Project Website --- http://a2l.physics.umass.edu/

Bob Jensen's threads on science and medicine tutorials are at http://www.trinity.edu/rjensen/Bookbob2.htm#Science


Onsite Versus Online Differences for Faculty

Soaring Popularity of E-Learning Among Students But Not Faculty
How many U.S. students took at least one online course from a legitimate college in Fall 2005?

More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education . . . ‘We didn’t become faculty to sit in front of a computer screen,’
Elia Powers, "Growing Popularity of E-Learning," Inside Higher Ed, November 10, 2006 --- http://www.insidehighered.com/news/2006/11/10/online

More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education.

Roughly 3.2 million students took at least one online course from a degree-granting institution during the fall 2005 term, the Sloan Consortium said. That’s double the number who reported doing so in 2002, the first year the group collected data, and more than 800,000 above the 2004 total. While the number of online course participants has increased each year, the rate of growth slowed from 2003 to 2004.

The report, a joint partnership between the group and the College Board, defines online courses as those in which 80 percent of the content is delivered via the Internet.

The Sloan Survey of Online Learning, “Making the Grade: Online Education in the United States, 2006,” shows that 62 percent of chief academic officers say that the learning outcomes in online education are now “as good as or superior to face-to-face instruction,” and nearly 6 in 10 agree that e-learning is “critical to the long-term strategy of their institution.” Both numbers are up from a year ago.

Researchers at the Sloan Consortium, which is administered through Babson College and Franklin W. Olin College of Engineering, received responses from officials at more than 2,200 colleges and universities across the country. (The report makes few references to for-profit colleges, a force in the online market, in part because of a lack of survey responses from those institutions.)

Much of the report is hardly surprising. The bulk of online students are adult or “nontraditional” learners, and more than 70 percent of those surveyed said online education reaches students not served by face-to-face programs.

What stands out is the number of faculty who still don’t see e-learning as a valuable tool. Only about one in four academic leaders said that their faculty members “accept the value and legitimacy of online education,” the survey shows. That number has remained steady throughout the four surveys. Private nonprofit colleges were the least accepting — about one in five faculty members reported seeing value in the programs.

Elaine Allen, co-author of the report and a Babson associate professor of statistics and entrepreneurship, said those numbers are striking.

“As a faculty member, I read that response as, ‘We didn’t become faculty to sit in front of a computer screen,’ ” Allen said. “It’s a very hard adjustment. We sat in lectures for an hour when we were students, but there’s a paradigm shift in how people learn.”

Barbara Macaulay, chief academic officer at UMass Online, which offers programs through the University of Massachusetts, said nearly all faculty members teaching the online classes there also teach face-to-face courses, enabling them to see where an online class could fill in the gap (for instance, serving a student who is hesitant to speak up in class).

She said she isn’t surprised to see data illustrating the growing popularity of online courses with students, because her program has seen rapid growth in the last year. Roughly 24,000 students are enrolled in online degree and certificate courses through the university this fall — a 23 percent increase from a year ago, she said.

“Undergraduates see it as a way to complete their degrees — it gives them more flexibility,” Macaulay said.

The Sloan report shows that about 80 percent of students taking online courses are at the undergraduate level. About half are taking online courses through community colleges and 13 percent through doctoral and research universities, according to the survey.

Nearly all institutions with total enrollments exceeding 15,000 students have some online offerings, and about two-thirds of them have fully online programs, compared with about one in six at the smallest institutions (those with 1,500 students or fewer), the report notes. Allen said private nonprofit colleges are often set in enrollment totals and not looking to expand into the online market.

The report indicates that two-year colleges are particularly willing to be involved in online learning.

“Our institutions tend to embrace changes a little more readily and try different pedagogical styles,” said Kent Phillippe, a senior research associate at the American Association of Community Colleges. The report cites a few barriers to what it calls the “widespread adoption of online learning,” chief among them the concern among college officials that some of their students lack the discipline to succeed in an online setting. Nearly two-thirds of survey respondents defined that as a barrier.

Allen, the report’s co-author, said she thinks that issue arises mostly in classes in which work can be turned in at any time and lectures can be accessed at all hours. “If you are holding class in real time, there tends to be less attrition,” she said. The report doesn’t differentiate between the live and non-live online courses, but Allen said she plans to include that in next year’s edition.

Few survey respondents said acceptance of online degrees by potential employers was a critical barrier — although liberal arts college officials were more apt to see it as an issue.

November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

Hi Bob:

One reason why might be what I have seen: the in-residence accounting students that I talk with take online classes here because they are EASY and do not take much work. This would be very popular with students but not generally so with faculty.

John

November 10, 2006 reply from Bob Jensen

Hi John,

Then there is a quality control problem wherever this is the case. It would be a travesty if any respected college had two or more categories of academic standards or faculty assignments.

Variations in academic standards have long been a problem between part-time versus full-time faculty, although grade inflation can be higher or lower among part-time faculty. In one instance, it’s the tenure-track faculty who give higher grades because they're often more worried about student evaluations. At the opposite extreme it is part-time faculty who give higher grades for many reasons that we can think of if we think about it.

One thing that I'm dead certain about is that highly motivated students tend to do better in online courses ceteris paribus. Reasons are mainly that time is used more efficiently in getting to class (no wasted time driving or walking to class), less wasted time getting teammates together on team projects, and fewer reasons for missing class.

Also online alternatives offer some key advantages for certain types of handicapped students --- http://www.trinity.edu/rjensen/000aaa/thetools.htm 

My opinions on the learning advantages of E-Learning were heavily influenced by the most extensive and respected study of online versus onsite learning, the SCALE experiments using full-time resident students at the University of Illinois --- http://www.trinity.edu/rjensen/255wp.htm#Illinois 

In the SCALE experiments cutting across 30 disciplines, it was generally found that motivated students learned better online than their onsite counterparts having the same instructors. However, there was no significant impact on students who got low grades in online versus onsite treatment groups.

I think the main problem for faculty is that online teaching tends to burn out instructors more frequently than onsite teaching does. This was also evident in the SCALE experiments. When done correctly, online courses are more communication intensive between instructors and students. Also, online learning takes more preparation time if it is done correctly. 

My hero for online learning is still Amy Dunbar who maintains high standards for everything:

http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm

http://www.trinity.edu/rjensen/book01q4.htm#Dunbar

Bob Jensen

November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

Hi Bob:

That is also why many times it is not done 'right.' When it is not done right, students do not get the same education, and they generally do not complain about getting 'less for their money.' Since we do not offer online classes in our department, the ones our students take are the university-required general education courses, and our students in particular are not unhappy about being shortchanged in that area since they frequently would have preferred none anyway.

John

 

Bob Jensen's threads on open sharing and education technology are at http://www.trinity.edu/rjensen/000aaa/0000start.htm

Bob Jensen's threads on online training and education alternatives are at http://www.trinity.edu/rjensen/crossborder.htm

Motivations for Distance Learning --- http://www.trinity.edu/rjensen/000aaa/updateee.htm#Motivations

Bob Jensen's threads on the dark side of online learning and teaching are at http://www.trinity.edu/rjensen/000aaa/theworry.htm


Question
Why should teaching a course online take twice as much time as teaching it onsite?

Answer
Introduction to Economics:  Experiences of teaching this course online versus onsite

With a growing number of courses offered online and degrees offered through the Internet, there is a considerable interest in online education, particularly as it relates to the quality of online instruction. The major concerns are centering on the following questions: What will be the new role for instructors in online education? How will students' learning outcomes be assured and improved in online learning environment? How will effective communication and interaction be established with students in the absence of face-to-face instruction? How will instructors motivate students to learn in the online learning environment? This paper will examine new challenges and barriers for online instructors, highlight major themes prevalent in the literature related to “quality control or assurance” in online education, and provide practical strategies for instructors to design and deliver effective online instruction. Recommendations will be made on how to prepare instructors for quality online instruction.
Yi Yang and Linda F. Cornelious, "Preparing Instructors for Quality Online Instruction, Working Paper --- http://www.westga.edu/%7Edistance/ojdla/spring81/yang81.htm

Jensen Comment:  The bottom line is that teaching the course online took twice as much time, and the extra time came "largely from increased student contact and individualized instruction and not from the use of technology per se."

Online teaching is more likely to result in instructor burnout.  These and other issues are discussed in my "dark side" paper at http://www.trinity.edu/rjensen/000aaa/theworry.htm 

April 1, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu]

COMPUTERS IN THE CLASSROOM AND OPEN BOOK EXAMS

In "PCs in the Classroom & Open Book Exams" (UBIQUITY, vol. 6, issue 9, March 15-22, 2005), Evan Golub asks and supplies some answers to questions regarding open-book/open-note exams. When classroom computer use is allowed and encouraged, how can instructors secure the open-book exam environment? How can cheating be minimized when students are allowed Internet access during open-book exams? Golub's suggested solutions are available online at
http://www.acm.org/ubiquity/views/v6i9_golub.html

Ubiquity is a free, Web-based publication of the Association for Computing Machinery (ACM), "dedicated to fostering critical analysis and in-depth commentary on issues relating to the nature, constitution, structure, science, engineering, technology, practices, and paradigms of the IT profession." For more information, contact: Ubiquity, email: ubiquity@acm.org ; Web: http://www.acm.org/ubiquity/ 

For more information on the ACM, contact: ACM, One Astor Plaza, 1515 Broadway, New York, NY 10036, USA; tel: 800-342-6626 or 212-626-0500; Web: http://www.acm.org/


NEW EDUCAUSE E-BOOK ON THE NET GENERATION

EDUCATING THE NET GENERATION, a new EDUCAUSE e-book of essays edited by Diana G. Oblinger and James L. Oblinger, "explores the Net Gen and the implications for institutions in areas such as teaching, service, learning space design, faculty development, and curriculum." Essays include: "Technology and Learning Expectations of the Net Generation;" "Using Technology as a Learning Tool, Not Just the Cool New Thing;" "Curricula Designed to Meet 21st-Century Expectations;" "Faculty Development for the Net Generation;" and "Net Generation Students and Libraries." The entire book is available online at no cost at http://www.educause.edu/educatingthenetgen/ .

EDUCAUSE is a nonprofit association whose mission is to advance higher education by promoting the intelligent use of information technology. For more information, contact: Educause, 4772 Walnut Street, Suite 206, Boulder, CO 80301-2538 USA; tel: 303-449-4430; fax: 303-440-0461; email: info@educause.edu;  Web: http://www.educause.edu/

See also:

GROWING UP DIGITAL: THE RISE OF THE NET GENERATION by Don Tapscott McGraw-Hill, 1999; ISBN: 0-07-063361-4 http://www.growingupdigital.com/


EFFECTIVE E-LEARNING DESIGN

"The unpredictability of the student context and the mediated relationship with the student require careful attention by the educational designer to details which might otherwise be managed by the teacher at the time of instruction." In "Elements of Effective e-Learning Design" (INTERNATIONAL REVIEW OF RESEARCH IN OPEN AND DISTANCE LEARNING, March 2005) Andrew R. Brown and Bradley D. Voltz cover six elements of effective design that can help create effective e-learning delivery. Drawing upon examples from The Le@rning Federation, an initiative of state and federal governments of Australia and New Zealand, they discuss lesson planning, instructional design, creative writing, and software specification. The paper is available online at http://www.irrodl.org/content/v6.1/brown_voltz.html 

International Review of Research in Open and Distance Learning (IRRODL) [ISSN 1492-3831] is a free, refereed ejournal published by Athabasca University - Canada's Open University. For more information, contact Paula Smith, IRRODL Managing Editor; tel: 780-675-6810; fax: 780-675-672; email: irrodl@athabascau.ca ; Web: http://www.irrodl.org/

The Le@rning Federation (TLF) is an "initiative designed to create online curriculum materials and the necessary infrastructure to ensure that teachers and students in Australia and New Zealand can use these materials to widen and enhance their learning experiences in the classroom." For more information, see http://www.thelearningfederation.edu.au/


RECOMMENDED READING

"Recommended Reading" lists items that have been recommended to me or that Infobits readers have found particularly interesting and/or useful, including books, articles, and websites published by Infobits subscribers. Send your recommendations to carolyn_kotlas@unc.ed u for possible inclusion in this column.

Author Clark Aldrich recommends his new book:

LEARNING BY DOING: A COMPREHENSIVE GUIDE TO SIMULATIONS, COMPUTER GAMES, AND PEDAGOGY IN E-LEARNING AND OTHER EDUCATIONAL EXPERIENCES Wiley, April 2005 ISBN: 0-7879-7735-7 hardcover $60.00 (US)

Description from Wiley website:

"Designed for learning professionals and drawing on both game creators and instructional designers, Learning by Doing explains how to select, research, build, sell, deploy, and measure the right type of educational simulation for the right situation. It covers simple approaches that use basic or no technology through projects on the scale of computer games and flight simulators. The book role models content as well, written accessibly with humor, precision, interactivity, and lots of pictures. Many will also find it a useful tool to improve communication between themselves and their customers, employees, sponsors, and colleagues."

The table of contents and some excerpts are available at http://www.wiley.com/WileyCDA/WileyTitle/productCd-0787977357.html

Aldrich is also author of SIMULATIONS AND THE FUTURE OF LEARNING: AN INNOVATIVE (AND PERHAPS REVOLUTIONARY) APPROACH TO E-LEARNING. See http://www.wiley.com/WileyCDA/WileyTitle/productCd-0787969621.html  for more information or to request an evaluation copy of this title.

Also see
Looking at Learning….Again, Part 2
--- http://www.learner.org/resources/series114.html 

Bob Jensen's documents on education technology are at http://www.trinity.edu/rjensen/000aaa/0000start.htm

More on this topic appears in the module below.


"Nationally Recognized Assessment and Higher Education Study Center Findings as Resources for Assessment Projects," by Tracey Sutherland, Accounting Education News, 2007 Winter Issue, pp. 5-7

While nearly all accounting programs are wrestling with various kinds of assessment initiatives to meet local assessment plans and/or accreditation needs, most colleges and universities participate in larger assessment projects whose results may not be shared at the College/School level. There may be information available on your campus through campus-level assessment and institutional research that generate data that could be useful for your accounting program/school assessment initiatives. Below are examples of three such research projects, and some of their recent findings about college students.

Some things in The 2006 Report of the National Survey of Student Engagement especially caught my eye:

Promising Findings from the National Survey of Student Engagement

• Student engagement is positively related to first-year and senior student grades and to persistence between the first and second year of college.

• Student engagement has compensatory effects on grades and persistence of students from historically underserved backgrounds.

• Compared with campus-based students, distance education learners reported higher levels of academic challenge, engaged more often in deep learning activities, and reported greater developmental gains from college.

• Part-time working students reported grades comparable to other students and also perceived the campus to be as supportive of their academic and social needs as their non-working peers.

• Four out of five beginning college students expected that reflective learning activities would be an important part of their first-year experience.

Disappointing Findings from the National Survey of Student Engagement

• Students spend on average only about 13–14 hours a week preparing for class, far below what faculty members say is necessary to do well in their classes.

• Students study less during the first year of college than they expected to at the start of the academic year.

• Women are less likely than men to interact with faculty members outside of class including doing research with a faculty member.

• Distance education students are less involved in active and collaborative learning.

• Adult learners were much less likely to have participated in such enriching educational activities as community service, foreign language study, a culminating senior experience, research with faculty, and co-curricular activities.

• Compared with other students, part-time students who are working had less contact with faculty and participated less in active and collaborative learning activities and enriching educational experiences.

Some additional 2006 NSSE findings

• Distance education students reported higher levels of academic challenge, and reported engaging more often in deep learning activities such as the reflective learning activities. They also reported participating less in collaborative learning experiences and worked more hours off campus.

• Women students are more likely to be engaged in foreign language coursework.

• Male students spent more time engaged in working with classmates on projects outside of class.

• Almost half (46%) of adult students were working more than 30 hours per week and about three-fourths were caring for dependents. In contrast, only 3% of traditional-age students worked more than 30 hours per week, and about four-fifths spent no time caring for dependents.

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm


Online Versus Onsite for Students

"Students prefer online courses:  Classes popular with on-campus students," CNN, January 13, 2006 --- http://www.cnn.com/2006/EDUCATION/01/13/oncampus.online.ap/index.html

At least 2.3 million people took some kind of online course in 2004, according to a recent survey by The Sloan Consortium, an online education group, and two-thirds of colleges offering "face-to-face" courses also offer online ones. But what were once two distinct types of classes are looking more and more alike -- and often dipping into the same pool of students.

At some schools, online courses -- originally intended for nontraditional students living far from campus -- have proved surprisingly popular with on-campus students. A recent study by South Dakota's Board of Regents found 42 percent of the students enrolled in its distance-education courses weren't so distant: they were located on campus at the university that was hosting the online course.

Numbers vary depending on the policies of particular colleges, but other schools also have students mixing and matching online and "face-to-face" credits. Motives range from lifestyle to accommodating a job schedule to getting into high-demand courses.

Classes pose challenges
Washington State University had about 325 on-campus undergraduates taking one or more distance courses last year. As many as 9,000 students took both distance and in-person classes at Arizona State University last year.

"Business is really about providing options to their customers, and that's really what we want to do," said Sheila Aaker, extended services coordinator at Black Hills State.

Still, the trend poses something of a dilemma for universities.

They are reluctant to fill slots intended for distance students with on-campus ones who are just too lazy to get up for class. On the other hand, if they insist the online courses are just as good, it's hard to tell students they can't take them. And with the student population rising and pressing many colleges for space, they may have little choice.

In practice, the policy is often shaded. Florida State University tightened on-campus access to online courses several years ago when it discovered some on-campus students hacking into the system to register for them. Now it requires students to get an adviser's permission to take an online class.

Online, in-person classes blending Many schools, like Washington State and Arizona State, let individual departments and academic units decide who can take an online course. They say students with legitimate academic needs -- a conflict with another class, a course they need to graduate that is full -- often get permission, though they still must take some key classes in person.

In fact, the distinction between online and face-to-face courses is blurring rapidly. Many if not most traditional classes now use online components -- message boards, chat rooms, electronic filing of papers. Students can increasingly "attend" lectures by downloading a video or a podcast.

At Arizona State, 11,000 students take fully online courses and 40,000 use the online course management system, which is used by many "traditional" classes. Administrators say the distinction between online and traditional is now so meaningless it may not even be reflected in next fall's course catalogue.

Arizona State's director of distance learning, Marc Van Horne, says students are increasingly demanding both high-tech delivery of education and more control over their schedules. The university should do what it can to help them graduate on time, he says.

"Is that a worthwhile goal for us to pursue? I'd say 'absolutely,"' Van Horne said. "Is it strictly speaking the mission of a distance learning unit? Not really."

Then there's the question of whether students are well served by taking a course online instead of in-person. Some teachers are wary, saying showing up to class teaches discipline, and that lectures and class discussions are an important part of learning.

But online classes aren't necessarily easier. Two-thirds of schools responding to a recent survey by The Sloan Consortium agreed that it takes more discipline for students to succeed in an online course than in a face-to-face one.

"It's a little harder to get motivated," said Washington State senior Joel Gragg, who took two classes online last year (including "the psychology of motivation"). But, he said, lectures can be overrated -- he was still able to meet with the professor in person when he had questions -- and class discussions are actually better online than in a college classroom, with a diverse group exchanging thoughtful postings.

"There's young people, there's old people, there's moms, professional people," he said. "You really learn a lot more."

Bob Jensen's threads on distance education and training alternatives are at
http://www.trinity.edu/rjensen/crossborder.htm

 


The 2006 National Survey of Student Engagement, released November 13, 2006, for the first time offers a close look at distance education, with provocative new data suggesting that e-learners report higher levels of engagement, satisfaction and academic challenge than their on-campus peers --- http://nsse.iub.edu/NSSE_2006_Annual_Report/index.cfm

"The Engaged E-Learner," by Elizabeth Redden, Inside Higher Ed, November 13, 2006 --- http://www.insidehighered.com/news/2006/11/13/nsse

The 2006 National Survey of Student Engagement, released today, for the first time offers a close look at distance education, offering provocative new data suggesting that e-learners report higher levels of engagement, satisfaction and academic challenge than their on-campus peers.

Beyond the numbers, however, what institutions choose to do with the data promises to attract extra attention to this year’s report.

NSSE is one of the few standardized measures of academic outcomes that most officials across a wide range of higher education institutions agree offers something of value. Yet NSSE does not release institution-specific data, leaving it to colleges to choose whether to publicize their numbers.

Colleges are under mounting pressure, however, to show in concrete, measurable ways that they are successfully educating students, fueled in part by the recent release of the report from the Secretary of Education’s Commission on the Future of Higher Education, which emphasizes the need for the development of comparable measures of student learning. In the commission’s report and in college-led efforts to heed the commission’s call, NSSE has been embraced as one way to do that. In this climate, will a greater number of colleges embrace transparency and release their results?

Anywhere between one-quarter and one-third of the institutions participating in NSSE choose to release some data, said George Kuh, NSSE’s director and a professor of higher education at Indiana University at Bloomington. But that number includes not only those institutions that release all of the data, but also those that pick and choose the statistics they’d like to share.

In the “Looking Ahead” section that concluded the 2006 report, the authors note that NSSE can “contribute to the higher education improvement and accountability agenda,” teaming with institutions to experiment with appropriate ways to publicize their NSSE data and developing common templates for colleges to use. The report cautions that the data released for accountability purposes should be accompanied by other indicators of student success, including persistence and graduation rates, degree/certificate completion rates and measurements of post-college endeavors.

“Has this become a kind of a watershed moment when everybody’s reporting? No. But I think what will happen as a result of the Commission on the Future of Higher Ed, Secretary (Margaret) Spellings’ workgroup, is that there is now more interest in figuring out how to do this,” Kuh said.

Charles Miller, chairman of the Spellings commission, said he understands that NSSE’s pledge not to release institutional data has encouraged colleges to participate — helping the survey, first introduced in 1999, get off the ground and gain wide acceptance. But Miller said he thinks that at this point, any college that chooses to participate in NSSE should make its data public.

“Ultimately, the duty of the colleges that take public funds is to make that kind of data public. It’s not a secret that the people in the academy ought to have. What’s the purpose of it if it’s just for the academy? What about the people who want to get the most for their money?”

Participating public colleges are already obliged to provide the data upon request, but Miller said private institutions, which also rely heavily on public financial aid funds, should share that obligation.

Kuh said that some colleges’ reluctance to publicize the data stems from a number of factors, the primary reason being that they are not satisfied with the results and feel they might reflect poorly on the institution.

In addition, some college officials fear that the information, if publicized, may be misused, even conflated to create a rankings system. Furthermore, sharing the data would represent a shift in the cultural paradigm at some institutions used to keeping sensitive data to themselves, Kuh said.

“The great thing about NSSE and other measures like it is that it comes so close to the core of what colleges and universities are about — teaching and learning. This is some of the most sensitive information that we have about colleges and universities,” Kuh said.

But Miller said the fact that the data get right to the heart of the matter is precisely why it should be publicized. “It measures what students get while they’re at school, right? If it does that, what’s the fear of publishing it?” Miller asked. “If someone would say, ‘It’s too hard to interpret,’ then that’s an insult to the public.” And if colleges are afraid of what their numbers would suggest, they shouldn’t participate in NSSE at all, Miller said.

However, Douglas Bennett, president of Earlham College in Indiana and chair of NSSE’s National Advisory Board, affirmed NSSE’s commitment to opening survey participation to all institutions without imposing any pressure that they should make their institutional results public. “As chair of the NSSE board, we believe strongly that institutions own their own data and what they do with it is up to them. There are a variety of considerations institutions are going to take into account as to whether or not they share their NSSE data,” Bennett said.

However, as president of Earlham, which releases all of its NSSE data and even releases its accreditation reports, Bennett said he thinks colleges, even private institutions, have a professional and moral obligation to demonstrate their effectiveness in response to accountability demands — through NSSE or another means a college might deem appropriate.

This Year’s Survey

The 2006 NSSE survey, which is based on data from 260,000 randomly selected first-year and senior students at 523 four-year institutions (NSSE’s companion survey, the Community College Survey of Student Engagement, focuses on two-year colleges), looks much more deeply than previous iterations of the survey did into the performance of online students.

Distance learning students outperform or perform on par with on-campus students on measures including level of academic challenge; student-faculty interaction; enriching educational experiences; higher-order, integrative, and reflective learning; and gains in practical competence, personal and social development, and general education. They demonstrate lower levels of engagement when it comes to active and collaborative learning.

Karen Miller, a professor of education at the University of Louisville who studies online learning, said the results showing higher or equal levels of engagement among distance learning students make sense: “If you imagine yourself as an undergraduate in a fairly large class, you can sit in that class and feign engagement. You can nod and make eye contact; your mind can be a million miles away. But when you’re online, you’ve got to respond, you’ve got to key in your comments on the discussion board, you’ve got to take part in the group activities.”

Plus, Miller added, typing is a more complex psycho-motor skill than speaking, requiring extra reflection. “You see what you have said, right in front of your eyes, and if you realize it’s kind of half-baked you can go back and correct it before you post it.”

Also, said Kuh, most of the distance learners surveyed were over the age of 25. “Seventy percent of them are adult learners. These folks are more focused; they’re better able to manage their time and so forth,” said Kuh, who added that many of the concerns surrounding distance education focus on traditional-aged students who may not have mastered their time management skills.

Among other results from the 2006 NSSE survey:

Bob Jensen's threads on distance education and training alternatives around the world are at http://www.trinity.edu/rjensen/Crossborder.htm


Soaring Popularity of E-Learning Among Students But Not Faculty
How many U.S. students took at least one online course from a legitimate college in Fall 2005?

More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education . . . ‘We didn’t become faculty to sit in front of a computer screen,’
Elia Powers, "Growing Popularity of E-Learning," Inside Higher Ed, November 10, 2006 --- http://www.insidehighered.com/news/2006/11/10/online

More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education.

Roughly 3.2 million students took at least one online course from a degree-granting institution during the fall 2005 term, the Sloan Consortium said. That’s double the number who reported doing so in 2002, the first year the group collected data, and more than 800,000 above the 2004 total. While the number of online course participants has increased each year, the rate of growth slowed from 2003 to 2004.

The report, a joint partnership between the group and the College Board, defines online courses as those in which 80 percent of the content is delivered via the Internet.

The Sloan Survey of Online Learning, “Making the Grade: Online Education in the United States, 2006,” shows that 62 percent of chief academic officers say that the learning outcomes in online education are now “as good as or superior to face-to-face instruction,” and nearly 6 in 10 agree that e-learning is “critical to the long-term strategy of their institution.” Both numbers are up from a year ago.

Researchers at the Sloan Consortium, which is administered through Babson College and Franklin W. Olin College of Engineering, received responses from officials at more than 2,200 colleges and universities across the country. (The report makes few references to for-profit colleges, a force in the online market, in part because of a lack of survey responses from those institutions.)

Much of the report is hardly surprising. The bulk of online students are adult or “nontraditional” learners, and more than 70 percent of those surveyed said online education reaches students not served by face-to-face programs.

What stands out is the number of faculty who still don’t see e-learning as a valuable tool. Only about one in four academic leaders said that their faculty members “accept the value and legitimacy of online education,” the survey shows. That number has remained steady throughout the four surveys. Private nonprofit colleges were the least accepting — about one in five faculty members reported seeing value in the programs.

Elaine Allen, co-author of the report and a Babson associate professor of statistics and entrepreneurship, said those numbers are striking.

“As a faculty member, I read that response as, ‘We didn’t become faculty to sit in front of a computer screen,’ ” Allen said. “It’s a very hard adjustment. We sat in lectures for an hour when we were students, but there’s a paradigm shift in how people learn.”

Barbara Macaulay, chief academic officer at UMass Online, which offers programs through the University of Massachusetts, said nearly all faculty members teaching the online classes there also teach face-to-face courses, enabling them to see where an online class could fill in the gap (for instance, serving a student who is hesitant to speak up in class).

She said she isn’t surprised to see data illustrating the growing popularity of online courses with students, because her program has seen rapid growth in the last year. Roughly 24,000 students are enrolled in online degree and certificate courses through the university this fall — a 23 percent increase from a year ago, she said.

“Undergraduates see it as a way to complete their degrees — it gives them more flexibility,” Macaulay said.

The Sloan report shows that about 80 percent of students taking online courses are at the undergraduate level. About half are taking online courses through community colleges and 13 percent through doctoral and research universities, according to the survey.

Nearly all institutions with total enrollments exceeding 15,000 students have some online offerings, and about two-thirds of them have fully online programs, compared with about one in six at the smallest institutions (those with 1,500 students or fewer), the report notes. Allen said private nonprofit colleges are often set in enrollment totals and not looking to expand into the online market.

The report indicates that two-year colleges are particularly willing to be involved in online learning.

“Our institutions tend to embrace changes a little more readily and try different pedagogical styles,” said Kent Phillippe, a senior research associate at the American Association of Community Colleges. The report cites a few barriers to what it calls the “widespread adoption of online learning,” chief among them the concern among college officials that some of their students lack the discipline to succeed in an online setting. Nearly two-thirds of survey respondents defined that as a barrier.

Allen, the report’s co-author, said she thinks that issue arises mostly in classes in which work can be turned in at any time and lectures can be accessed at all hours. “If you are holding class in real time, there tends to be less attrition,” she said. The report doesn’t differentiate between the live and non-live online courses, but Allen said she plans to include that in next year’s edition.

Few survey respondents said acceptance of online degrees by potential employers was a critical barrier — although liberal arts college officials were more apt to see it as an issue.

November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

Hi Bob:

One reason why might be what I have seen. The in-residence accounting students that I talk with take online classes here because they are EASY and do not take much work. This would be very popular with students but not generally so with faculty.

John

November 10, 2006 reply from Bob Jensen

Hi John,

Then there is a quality control problem wherever this is a fact. It would be a travesty if any respected college had two or more categories of academic standards or faculty assignments.

Variations in academic standards have long been a problem between part-time and full-time faculty, although grade inflation can be higher or lower among part-time faculty. In some instances it's the tenure-track faculty who give higher grades because they're often more worried about student evaluations. At the opposite extreme, it is the part-time faculty who give higher grades, for reasons that readily come to mind.

One thing that I'm dead certain about is that highly motivated students tend to do better in online courses, ceteris paribus. The reasons are mainly that time is used more efficiently (no wasted time driving or walking to class), less time is wasted getting teammates together on team projects, and there are fewer reasons for missing class.

Also online alternatives offer some key advantages for certain types of handicapped students --- http://www.trinity.edu/rjensen/000aaa/thetools.htm 

My opinions on learning advantages of E-Learning were heavily influenced by the most extensive and respected study of online versus onsite learning experiments in the SCALE experiments using full-time resident students at the University of Illinois --- http://www.trinity.edu/rjensen/255wp.htm#Illinois 

In the SCALE experiments cutting across 30 disciplines, it was generally found that motivated students learned better online than their onsite counterparts having the same instructors. However, there was no significant impact on students who got low grades in the online versus onsite treatment groups.

I think the main problem with faculty is that online teaching tends to burn out instructors more frequently than onsite teaching. This was also evident in the SCALE experiments. When done correctly, online courses are more communication intensive between instructors and students. Also, online learning takes more preparation time if it is done correctly.

My hero for online learning is still Amy Dunbar who maintains high standards for everything:

http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm

http://www.trinity.edu/rjensen/book01q4.htm#Dunbar

Bob Jensen

November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

Hi Bob:

Also why many times it is not done 'right.' When it is not done right, they do not get the same education. Students generally do not complain about getting 'less for their money.' Since we do not do online classes in our department, the ones the students are taking are the university-required general education courses, and our students in particular are not unhappy about being shortchanged in that area, as they frequently would have preferred none anyway.

John

 

Bob Jensen's threads on open sharing and education technology are at http://www.trinity.edu/rjensen/000aaa/0000start.htm

Bob Jensen's threads on online training and education alternatives are at http://www.trinity.edu/rjensen/crossborder.htm

Motivations for Distance Learning --- http://www.trinity.edu/rjensen/000aaa/updateee.htm#Motivations

Bob Jensen's threads on the dark side of online learning and teaching are at http://www.trinity.edu/rjensen/000aaa/theworry.htm


October 5, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

STUDENTS' PERCEPTIONS OF ONLINE LEARNING

"The ultimate question for educational research is how to optimize instructional designs and technology to maximize learning opportunities and achievements in both online and face-to-face environments." Karl L.Smart and James J. Cappel studied two undergraduate courses -- an elective course and a required course -- that incorporated online modules into traditional classes. Their research of students' impressions and satisfaction with the online portions of the classes revealed mixed results:

-- "participants in the elective course rated use of the learning modules slightly positive while students in the required course rated them slightly negative"

-- "while students identified the use of simulation as the leading strength of the online units, it was also the second most commonly mentioned problem of these units"

-- "students simply did not feel that the amount of time it took to complete the modules was worth what was gained"

The complete paper, "Students' Perceptions of Online Learning: A Comparative Study" (JOURNAL OF INFORMATION TECHNOLOGY EDUCATION, vol. 5, 2006, pp. 201-19), is available online at http://jite.org/documents/Vol5/v5p201-219Smart54.pdf.

Current and back issues of the Journal of Information Technology Education (JITE) [ISSN 1539-3585 (online) 1547-9714 (print)] are available free of charge at http://jite.org/. The peer-reviewed journal is published annually by the Informing Science Institute. For more information contact: Informing Science Institute, 131 Brookhill Court, Santa Rosa, California 95409 USA; tel: 707-531-4925; fax: 480-247-5724;

Web: http://informingscience.org/.



I have heard some faculty argue that asynchronous Internet courses just do not mesh with Trinity's on-campus mission. The Scale Experiments at the University of Illinois indicate that many students learn better and prefer online courses even if they are full-time, resident students. The University of North Texas is finding out the same thing. There may be some interest in what our competition may be in the future even for full-time, on-campus students at private as well as public colleges and universities.
On January 17, 2003, Ed Scribner forwarded this article from The Dallas Morning News

Students Who Live on Campus Choosing Internet Courses Syndicated From: The Dallas Morning News

DALLAS - Jennifer Pressly could have walked to a nearby lecture hall for her U.S. history class and sat among 125 students a few mornings a week.

But the 19-year-old freshman at the University of North Texas preferred rolling out of bed and attending class in pajamas at her dorm-room desk. Sometimes she would wait until Saturday afternoon.

The teen from Rockwall, Texas, took her first college history class online this fall semester. She never met her professor and knew only one of her 125 classmates: her roommate.

"I take convenience over lectures," she said. "I think I would be bored to death if I took it in lecture."

She's part of a controversial trend that has surprised many university officials across the country. Given a choice, many traditional college students living on campus pick an online course. Most universities began offering courses via the Internet in the late 1990s to reach a different audience - older students who commute to campus and are juggling a job and family duties.

During the last year, UNT began offering an online option for six of its highest-enrollment courses that are typically taught in a lecture hall with 100 to 500 students. The online classes, partly offered as a way to free up classroom space in the growing school, filled up before pre-registration ended, UNT officials said. At UNT, 2,877 of the about 23,000 undergraduates are taking at least one course online.

Nationwide, colleges are reporting similar experiences, said Sally Johnstone, director of WCET, a Boulder, Colo., cooperative of state higher education boards and universities that researches distance education. Kansas State University, in a student survey last spring, discovered that 80 percent of its online students were full-time and 20 percent were part-time, the opposite of the college's expectations, Johnstone said.

"Why pretend these kids want to be in a class all the time? They don't, but kids don't come to campus to sit in their dorm rooms and do things online exclusively," she said. "We're in a transition, and it's a complex one."

The UT Telecampus, a part of the University of Texas System that serves 15 universities and research facilities, began offering online undergraduate classes in state-required courses two years ago. Its studies show that 80 percent of the 2,260 online students live on campus, and the rest commute.

Because they are restricted to 30 students each, the UT System's online classes are touted as a more intimate alternative to lecture classes, said Darcy Hardy, director of the UT Telecampus.

"The freshman-sophomore students are extremely Internet-savvy and understand more about online options and availability than we could have ever imagined," Hardy said.

Online education advocates say professors can reach students better online than in lecture classes because of the frequent use of e-mail and online discussion groups. Those who oppose the idea say they worry that undergraduates will miss out on the debate, depth and interaction of traditional classroom instruction.

UNT, like most colleges, is still trying to figure out the effect on its budget. The professorial salary costs are the same, but an online course takes more money to develop. The online students, however, free up classroom space and eliminate the need for so many new buildings in growing universities. The price to enroll is typically the same for students, whether they go to a classroom or sit at their computer.

Mike Campbell, a history professor at UNT for 36 years, does not want to teach an online class, nor does he approve of offering undergraduate history via the Internet.

"People shouldn't be sitting in the dorms doing this rather than walking over here," he said. "That is based on a misunderstanding of what matters in history."

In his class of 125, he asks students rhetorical questions they answer en masse to be sure they're paying attention, he said. He goes beyond the textbook, discussing such topics as the moral and legal issues surrounding slavery.

He said he compares the online classes to the correspondence courses he hated but had to teach when he came to UNT in 1966. Both methods are too impersonal, he said, recalling how he mailed assignments and tests to correspondence students.

UNT professors who teach online say the courses are interactive, unlike correspondence courses.

Matt Pearcy has lectured 125 students for three hours at a time.

"You'd try to be entertaining," he said. "You have students who get bored after 45 minutes, no matter what you're doing. They're filling out notes, doing their to-do list, reading their newspaper in front of you."

In his online U.S. history class at UNT, students get two weeks to finish each lesson. They read text, complete click-and-drag exercises, like one that matches terms with historical figures, and take quizzes. They participate in online discussions and group projects, using e-mail to communicate.

"Hands-down, I believe this is a more effective way to teach," said Pearcy, who is based in St. Paul, Minn. "In this setting, they go to the class when they're ready to learn. They're interacting, so they're paying attention."

Pressly said she liked the hands-on work in the online class. She could do crossword puzzles to reinforce her history lessons. Or she could click an icon and see what Galileo saw through his telescope in the 17th century.

"I took more interest in this class than the other ones," she said.

The class, though, required her to be more disciplined, she said, and that added stress. Two weeks in a row, she waited till 11:57 p.m. Sunday - three minutes before the deadline - to turn in her assignment.

Online courses aren't for everybody.

"The thing about sitting in my dorm, there's so much to distract me," said Trevor Shive, a 20-year-old freshman at UNT. "There's the Internet. There's TV. There's radio."

He said students on campus should take classes in the real, not virtual, world.

"They've got legs; they can walk to class," he said.

Continued in the article at http://www.dallasnews.com/ 


January 17, 2003 response from John L. Rodi [jrodi@IX.NETCOM.COM]

I would have added one additional element. Today I think too many of us tend to teach accounting the way you teach drivers education: get in the car, turn on the key, and off you go. If something goes wrong with the car, you are sunk since you know nothing conceptually. Furthermore, it makes you a victim of those who do. Conceptual accounting education teaches you to respond to choices, that is, not only how to drive but what to drive. Thanks for the wonderful analogy.

John Rodi 
El Camino College

January 21 reply from 

On the subject of technology and teaching accounting, I wonder how many of you are in the SAP University Alliance and using it for accounting classes. I just teach advanced financial accounting, and have not found a use for it there. However, I have often felt that there is a place for it in intro financial, in managerial and in AIS. On the latter, there is at least one good text book containing SAP exercises and problems.

Although there are over 400 universities in the world in the program, one of the areas where use is lowest is accounting courses. The limitation appears to be related to a combination of the learning curve for professors, together with an uncertainty as to how it can be used to effectively teach conceptual material or otherwise fit into curricula.

Gerald Trites, FCA 
Professor of Accounting and Information Systems 
St Francis Xavier University 
Antigonish, Nova Scotia 
Website
- http://www.stfx.ca/people/gtrites 

The SAP University Alliance homepage is at http://www.sap.com/usa/company/ua/ 

In today's fast-paced, technically advanced society, universities must master the latest technologies, not only to achieve their own business objectives cost-effectively but also to prepare the next generation of business leaders. To meet the demands for quality teaching, advanced curriculum, and more technically sophisticated graduates, your university is constantly searching for innovative ways of acquiring the latest information technology while adhering to tight budgetary controls.

SAP can help. A world leader in the development of business software, SAP is making its market-leading, client/server-based enterprise software, the R/3® System, available to the higher education community. Through our SAP University Alliance Program, we are proud to offer you the world's most popular software of its kind for today's businesses. SAP also provides setup, follow-up consulting, and R/3 training for faculty - all at our expense. The SAP R/3 System gives you the most advanced software capabilities used by businesses of all sizes and in all industries around the world.

There are many ways a university can benefit from an educational alliance with SAP. By partnering with SAP and implementing the R/3 System, your university can:


January 6, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

No Significant Difference Phenomenon website http://www.nosignificantdifference.org/ 

The website is a companion piece to Thomas L. Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of 355 research reports, summaries, and papers that document no significant differences in student outcomes between alternate modes of education delivery.


DISTANCE LEARNING AND FACULTY CONCERNS

Despite the growing number of distance learning programs, faculty are often reluctant to move their courses into the online medium. In "Addressing Faculty Concerns About Distance Learning" (ONLINE JOURNAL OF DISTANCE LEARNING ADMINISTRATION, vol. VIII, no. IV, Winter 2005) Jennifer McLean discusses several areas that influence faculty resistance, including: the perception that technical support and training is lacking, the fear of being replaced by technology, and the absence of a clearly-understood institutional vision for distance learning. The paper is available online at
http://www.westga.edu/%7Edistance/ojdla/winter84/mclean84.htm

The Online Journal of Distance Learning Administration is a free, peer-reviewed quarterly published by the Distance and Distributed Education Center, The State University of West Georgia, 1600 Maple Street, Carrollton, GA 30118 USA; Web: http://www.westga.edu/~distance/jmain11.html .

 


December 10, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu]

E-LEARNING ONLINE PRESENTATIONS

The University of Calgary Continuing Education sponsors Best Practices in E-Learning, a website that provides a forum for anyone working in the field to share their best practices. This month's presentations include:

-- "To Share or Not To Share: There is No Question" by Rosina Smith Details a new model for permitting "the reuse, multipurposing, and repurposing of existing content"

-- "Effective Management of Distributed Online Educational Content" by Gary Woodill "[R]eviews the history of online educational content, and argues that the future is in distributed content learning management systems that can handle a wide diversity of content types . . . identifies 40 different genres of online educational content (with links to examples)"

Presentations are in various formats, including Flash, PDF, HTML, and PowerPoint slides. Registered users can interact with the presenters and post to various discussion forums on the website. There is no charge to register and view presentations. You can also subscribe to their newsletter which announces new presentations each month. (Note: No archive of past months' presentations appears to be on the website.)

For more information, contact: Rod Corbett, University of Calgary Continuing Education; tel:403-220-6199 or 866-220-4992 (toll-free); email: rod.corbett@ucalgary.ca ; Web: http://elearn.ucalgary.ca/showcase/


NEW APPROACHES TO EVALUATING ONLINE LEARNING

"The clear implication is that online learning is not good enough and needs to prove its worth before gaining full acceptance in the pantheon of educational practices. This comparative frame of reference is specious and irrelevant on several counts . . ." In "Escaping the Comparison Trap: Evaluating Online Learning on Its Own Terms (INNOVATE, vol. 1, issue 2, December 2004/January 2005), John Sener writes that, rather than being inferior to classroom instruction, "[m]any online learning practices have demonstrated superior results or provided access to learning experiences not previously possible." He describes new evaluation models that are being used to judge online learning on its own merits. The paper is available online at http://www.innovateonline.info/index.php?view=article&id=11&action=article.

You will need to register on the Innovate website to access the paper; there is no charge for registration and access.

Innovate [ISSN 1552-3233] is a bimonthly, peer-reviewed online periodical published by the Fischler School of Education and Human Services at Nova Southeastern University. The journal focuses on the creative use of information technology (IT) to enhance educational processes in academic, commercial, and government settings. Readers can comment on articles, share material with colleagues and friends, and participate in open forums. For more information, contact James L. Morrison, Editor-in-Chief, Innovate; email: innovate@nova.edu ; Web: http://www.innovateonline.info/.

 


I read the following from the scheduled program of the 29th Annual Accounting Education Conference, October 17-18, 2003, sponsored by the Texas CPA Society, San Antonio Airport Hilton.

WEB-BASED AND FACE-TO-FACE INSTRUCTION:
    A COMPARISON OF LEARNING OUTCOMES IN A FINANCIAL ACCOUNTING COURSE

Explore the results of a study conducted over a four-semester period that focused on the same graduate level financial accounting course that was taught using web-based instruction and face-to-face instruction.  Discuss the comparison of student demographics and characteristics, course satisfaction, and comparative statistics related to learning outcomes.

Doug Rusth/associate professor/University of Houston at Clear Lake/Clear Lake


Bob Jensen's threads on asynchronous versus synchronous learning are at http://www.trinity.edu/rjensen/255wp.htm 
Note in particular the research outcomes of The Scale Experiment at the University of Illinois --- http://www.trinity.edu/rjensen/255wp.htm#Illinois 

Once again, my advice to new faculty is at http://www.trinity.edu/rjensen/000aaa/newfaculty.htm 


Issues in Group Grading

December 6, 2004 message from Glen Gray [glen.gray@CSUN.EDU]

When I have students do group projects, I require that each team member complete a peer review form on which the team member evaluates the other team members on 8 attributes using a scale from 0 to 4. On this form they also give their team members an overall grade. In a footnote it is explained that an "A" means the team member receives the full team grade; a "B" means a 10% reduction from the team grade; a "C" means a 20% discount; a "D" means a 30% discount; an "E" means a 40% discount; and an "F" means a 100% discount (in other words, the team member should get a zero).

I assumed that the form added a little peer pressure to the team work process. In the past, students were usually pretty kind to each other. But now I have a situation where the team members on one team have all given either E's or F's to one of their team members. Their written comments about this guy are all pretty consistent.

Now I'm worried that if I actually enforce the discount scale, things are going to get messy and the s*** is going to hit the fan. I'm going to have one very upset student. He is going to be mad at his fellow teammates.

Has anyone had similar experience? What has the outcome been? Is there a confidentiality issue here? In other words, are the other teammates also going to be upset that I revealed their evaluations? Is there going to be a lawsuit coming over the horizon?

Glen L. Gray, PhD, CPA
Dept. of Accounting & Information Systems
College of Business & Economics
California State University, Northridge
Northridge, CA 91330-8372
http://www.csun.edu/~vcact00f 

Most of the replies to the message above encouraged being clear at the beginning that team evaluations would affect the final grade and then sticking to that policy.
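
For readers who want to see the arithmetic of Glen's discount scale, here is a minimal sketch in Python. The letter-to-discount mapping comes from his message above; the function name, the rule for combining ratings from several teammates (a simple average), and the example numbers are illustrative assumptions, not part of his actual course procedure.

# Hypothetical sketch of the peer-review discount scale described above.
# The averaging rule is an assumption; the original message does not say
# how ratings from several teammates are combined.
DISCOUNTS = {"A": 0.00, "B": 0.10, "C": 0.20, "D": 0.30, "E": 0.40, "F": 1.00}

def adjusted_grade(team_grade, peer_ratings):
    """Reduce the team grade by the average discount implied by peer ratings."""
    if not peer_ratings:
        return team_grade
    avg_discount = sum(DISCOUNTS[r.upper()] for r in peer_ratings) / len(peer_ratings)
    return team_grade * (1.0 - avg_discount)

# Example: a 100-point team project where three teammates rated a member
# "E", "F", and "E" (roughly the situation Glen describes) yields
# 100 * (1 - 0.60) = 40 points under this reading of the scale.
print(adjusted_grade(100.0, ["E", "F", "E"]))  # 40.0

Using the lowest rating instead of the average would make the policy harsher; either reading is consistent with the scale as stated.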

December 5, 2004 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]

Glen, the fact that you are in California, by itself, makes it much more difficult to predict the lawsuit question. I've seen some lawsuits (and even worse, legal outcomes) from California that are completely unbelievable... Massachusetts too.

But that said, I can share my experience that I have indeed given zero points on a group grade to students where the peer evaluations indicated unsatisfactory performance. My justification to the students in these "zero" cases has always been, "it was clear from your peers that you were not part of the group effort, and thus have not earned the points for the group assignment".

I never divulge any specific comments, but I do tell the student that I am willing to share the comments with an impartial arbiter if they wish to have a third party confirm my evidence. To date, no student has ever contested the decision.

Every other semester or so, I have to deduct points to some degree for unsatisfactory work as judged by peers. So far, I've had no problems making it stick, and in most cases, the affected student willingly admits their deficiency, although usually with excuses and rationales.

But I'm not in California, and the legal precedents here are unlike those in your neck of the woods.

If I were on the west coast, however, I'd probably be likely to at least try to stick to my principles as far as my university legal counsel would allow. Then, if my counsel didn't support me, I'd look for employment in a part of the country with a more reasonable legal environment (although that is getting harder to find every day).

Good luck,

David Fordham

December 5, 2004 reply from Amy Dunbar

Sometimes groups do blow up. Last summer I had one group ask me to remove a member. Another group had a nonfunctioning member, based on the participation scores. I formed an additional group comprised of just those two. They finally learned how to work. Needless to say they weren’t happy with me, but the good thing about teaching is that every semester we get a fresh start!

Another issue came up for the first time, at least that I noticed. I learned that one group made a pact to rate each other high all semester long regardless of work level, and I still am not sure how I am going to avoid that problem next time around. The agreement came to light when one of the students was upset that he did so poorly on my exams. He told his senior that he had no incentive to do the homework because he could just get the answers from the other group members, and he didn’t have to worry about being graded down because of the agreement. The student was complaining that the incentive structure I set up hurt him because he needed more push to do the homework. The senior told me after the class ended. Any suggestions?

TEXAS IS GOING TO THE ROSE BOWL!!!!!!!!! Go Horns! Oops, that just slipped out.

Amy Dunbar
A Texas alum married to a Texas fanatic

December 6, 2004 reply from Tracey Sutherland [tracey@AAAHQ.ORG]

Glen, My first thought on reading your post was that if things get complicated it could be useful to have a context for your grading policy that clearly establishes that it falls within common practice (in accounting and in cooperative college classrooms in general). Now you've already built some context from within accounting by gathering some responses here from a number of colleagues for whom this is a regular practice. Neal's approach can be a useful counterpart to peer evaluation for triangulation purposes -- sometimes students will report that they weren't really on-point for one reason or another (I've done this with good result but only with upper-level grad students).

If the issue becomes more complicated because the student challenges your approach up the administrative ladder, you could provide additional context for the consistency of your approach by referencing the considerable body of work on these issues in the higher education research literature -- you are using a well-established approach that's been frequently tested. A great resource if you need it is Barbara Millis and Phil Cottell's book "Cooperative Learning for Higher Education Faculty" published by Oryx Press (American Council on Education Series on Higher Education). They do a great job of annotating the major work in the area in a short, accessible, and concise book that also includes established criteria used for evaluating group work and some sample forms for peer assessment and self-assessment for group members (also just a great general resource for well-tested cooperative/group activities -- and tips for how to manage implementing them). Phil Cottell is an accounting professor (Miami U.) and would be a great source of information should you need it.

Your established grading policy indicates that there would be a reduction of grade when team members give poor peer evaluations -- which wouldn't necessarily mean that you would reveal individual's evaluations but that a negative aggregate evaluation would have an effect -- and that would protect confidentiality consistently with your policy. It seems an even clearer case because all group members have given consistently negative evaluations -- as long as it's not some weird interpersonal thing -- something that sounds like that would be a red flag for the legal department. I hate it that we so often worry about legal ramifications . . . but then again it pays to be prepared!

Peace of the season, 

Tracey

December 6, 2004 reply from Bob Jensen

I once listened to an award-winning AIS professor from a very major university (that after last night won't be going to the Orange Bowl this year) say that the best policy is to promise everybody an A in the course. My question then is what the point of the confidential evaluations would be other than to make the professor feel bad at the end of the course?

Bob Jensen


Too Good to Grade:  How can these students get into doctoral programs and law school if their prestigious universities will not disclose grades and class rankings?  Why grade at all in this case?
Students at some top-ranked B-schools have a secret. It's something they can't share even if it means losing a job offer. It's one some have worked hard for and should be proud of, but instead they keep it to themselves. The secret is their grades.
At four of the nation's 10 most elite B-schools -- including Harvard, Stanford, and Chicago -- students have adopted policies that prohibit them or their schools from disclosing grades to recruiters. The idea is to reduce competitiveness and eliminate the risk associated with taking difficult courses. But critics say the only thing nondisclosure reduces is one of the most important lessons B-schools should teach: accountability (see BusinessWeek, 9/12/05, "Join the Real World, MBAs"). It's a debate that's flaring up on B-school campuses across the country. (For more on this topic, log on to our B-Schools Forum.)  And nowhere is it more intense than at University of Pennsylvania's Wharton School, where students, faculty, and administrators have locked horns over a school-initiated proposal that would effectively end a decade of grade secrecy at BusinessWeek's No. 3-ranked B-school. It wouldn't undo disclosure rules but would recognize the top 25% of each class -- in effect outing everyone else. It was motivated, says Vice-Dean Anjani Jain in a recent Wharton Journal article, by the "disincentivizing effects" of grade nondisclosure, which he says faculty blame for lackluster academic performance and student disengagement.
"Campus Confidential:   Four top-tier B-schools don't disclose grades. Now that policy is under attack," Business Week, September 12, 2005 --- http://snipurl.com/BWSept122

Jensen Comment:  Talk about moral hazard.  What if 90% of the applicants claim to be  straight A graduates at the very top of the class, and nobody can prove otherwise?

September 2, 2005 message from Denny Beresford [DBeresford@TERRY.UGA.EDU]

Bob,

The impression I have (perhaps I'm misinformed) is that most MBA classes result in nearly all A's and B's to students. If that's the case, I wonder how much a grade point average really matters.

Denny Beresford
 

September 2, 2005 reply from Bob Jensen

One of the schools, Stanford, in the 1970s lived with the Van Horn rule that dictated no more than 15% A grades in any MBA class.  I guess grade inflation has hit the top business schools.  Then again, maybe the students are just better than we were.

I added the following to my Tidbit on this:

Talk about moral hazard.  What if 90% of the applicants claim to be  straight A graduates at the very top of the class, and nobody can prove otherwise?

After your message, Denny, I see that perhaps it's not moral hazard. Maybe 90% of the students actually get A grades in these business schools, in which case nearly 90% would graduate summa cum laude.

What a joke!  It must be nice teaching students who never hammer you on teaching evaluations because you gave them a C or below.

The crucial quotation is "faculty blame for lackluster academic performance and student disengagement." Isn't this a laugh if they all get A and B grades for "lackluster academic performance and student disengagement"?

I think these top schools are simply catering to their customers!

 Bob Jensen

Harvard Business School Eliminates Ban on a Graduate's Discretionary Disclosure of Grades
The era of the second-year slump at Harvard Business School is over. Or maybe the days of student cooperation are over. Despite strong student opposition, the business school announced Wednesday that it was ending its ban on sharing grades with potential employers. Starting with new students who enroll in the fall, M.B.A. candidates can decide for themselves whether to share their transcripts. The ban on grade-sharing has been enormously popular with students since it was adopted in 1998. Supporters say that it discouraged (or at least kept to a reasonable level) the kind of cut-throat competition for which business schools are known. With the ban, students said they were more comfortable helping one another or taking difficult courses. But a memo sent to students by Jay O. Light, the acting dean, said that the policy was wrong. “Fundamentally, I believe it is inappropriate for HBS to dictate to students what they can and cannot say about their grades during the recruiting process. I believe you and your classmates earn your grades and should be accountable for them, as you will be accountable for your performance in the organizations you will lead in the future,” he wrote.
Scott Jaschik, "Survival of the Fittest MBA," Inside Higher Ed, December 16, 2005 --- http://www.insidehighered.com/news/2005/12/16/grades

Bob Jensen's threads on Controversies in Higher Education are at http://www.trinity.edu/rjensen/HigherEdControversies.htm

 


Software for faculty and departmental performance evaluation and management

May 30, 2006 message from Ed Scribner [escribne@NMSU.EDU]

A couple of months ago I asked for any experiences with systems that collect faculty activity and productivity data for multiple reporting needs (AACSB, local performance evaluation, etc.). I said I'd get back to the list with a summary of private responses.

No one reported any significant direct experience, but many AECMers provided names and e-mail addresses of [primarily] associate deans who had researched products from Sedona and Digital Measures. Since my associate dean was leading the charge, I just passed those addresses on to her.

We ended up selecting Digital Measures mainly because of our local faculty input, the gist of which was that it had a more professional "feel." My recollection is that the risk of data loss with either system is acceptable and that the university "owns" the data. I understand that a grad student is entering our data from the past five years to get us started.

Ed Scribner
New Mexico State University
Las Cruces, NM, USA

Jensen Comment
The Digital Measures homepage is at http://www.digitalmeasures.com/

Over 100 universities use Digital Measures' customized solutions to connect administrators, faculty, staff, students, and alumni. Take a look at a few of the schools and learn more about Digital Measures.


Free from the Huron Consulting Group (Registration Required) --- http://www.huronconsultinggroup.com/

Effort Reporting Technology for Higher Education ---
http://www.huronconsultinggroup.com/uploadedFiles/ECRT_email.pdf

Question Mark (Software for Test and Tutorial Generation and Networking)
Barron's Home Page
Metasys Japan Software
Question Mark America home page
Using ExamProc for OMR Exam Marking
Vizija d.o.o. - Educational Programs - Wisdom Tools
Yahoo Links

TechKnowLogia --- http://www.techknowlogia.org/ 
TechKnowLogia is an international online journal that provides policy makers, strategists, practitioners and technologists at the local, national and global levels with a strategic forum to:
Explore the vital role of different information technologies (print, audio, visual and digital) in the development of human and knowledge capital;
Share policies, strategies, experiences and tools in harnessing technologies for knowledge dissemination, effective learning, and efficient education services;
Review the latest systems and products of technologies of today, and peek into the world of tomorrow; and
Exchange information about resources, knowledge networks and centers of expertise.

Bob Jensen's threads on education technologies are at http://www.trinity.edu/rjensen/000aaa/0000start.htm


"What's the Best Q&A Site?" by Wade Roush, MIT's Technology Review, December 22, 2006 --- http://www.technologyreview.com/InfoTech/17932/ 

Magellan Metasearch --- http://sourceforge.net/projects/magellan2/ 

Many educators would like to put more materials on the web, but they are concerned about protecting access to all or parts of documents.  For example, a professor may want to share a case with the world but limit the accompanying case solution to selected users.  Or a professor may want to make certain lecture notes available but limit the access of certain copyrighted portions to students in a particular course.   If protecting parts of your documents is of great interest, you may want to consider NetCloak from Maxum at http://www.maxum.com/ .  You can download a free trial version.

NetCloak Professional Edition combines the power of Maxum's classic combo, NetCloak and NetForms, into a single CGI application or WebSTAR API plug-in. With NetCloak Pro, you can use HTML forms on your web site to create or update your web pages on the fly. Or you can store form data in text files for importing into spreadsheets or databases off-line. Using NetCloak Pro, you can easily create online discussion forums, classified ads, chat systems, self-maintaining home pages, frequently-asked-question lists, or online order forms!

NetCloak Pro also gives your web site access to e-mail. Users can send e-mail messages via HTML forms, and NetCloak Pro can create or update web pages whenever an e-mail message is received by any e-mail address. Imagine providing HTML archives of your favorite mailing lists in minutes!

NetCloak Pro allows users to "cloak" pages individually or "cloak" individual paragraphs or text strings.  The level of security seems to be much higher than that of scripted passwords in JavaScript or VBScript.

Eric Press led me to http://www.maxum.com/NetCloak/FAQ/FAQList.html   (Thank you Eric, and thanks for the "two lunches")

Richard Campbell responded as follows:

Alternatives to using Netcloak: 1. Symantec http://www.symantec.com  has a free utility called Secret which will password-protect any type of file.

2. Winzip http://www.winzip.com  has another shareware utility called Winzip Self-Extractor, which has a password-protect capability. The advantage to this approach is that you can bundle different file types (.doc, .xls), zip them, and have them automatically install to a folder that you have named. If you have a shareware install utility that creates a setup.exe routine, you also can have it install automatically on the student's machine. The price of this product is about $30.

 


Full Disclosure to Consumers of Higher Education (including assessment of colleges and the Spellings Commission Report) --- http://www.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure


Question
Guess which parents most strongly object to grade inflation?

Hint: Parents Say Schools Game System, Let Kids Graduate Without Skills

The Bredemeyers represent a new voice in special education: parents disappointed not because their children are failing, but because they're passing without learning. These families complain that schools give their children an easy academic ride through regular-education classes, undermining a new era of higher expectations for the 14% of U.S. students who are in special education. Years ago, schools assumed that students with disabilities would lag behind their non-disabled peers. They often were taught in separate buildings and left out of standardized testing. But a combination of two federal laws, adopted a quarter-century apart, have made it national policy to hold almost all children with disabilities to the same academic standards as other students.
John Hechinger and Daniel Golden, "Extra Help:  When Special Education Goes Too Easy on Students," The Wall Street Journal, August 21, 2007, Page A1 ---  http://online.wsj.com/article/SB118763976794303235.html?mod=todays_us_page_one

Bob Jensen's threads on grade inflation are at http://www.trinity.edu/rjensen/Assess.htm#GradeInflation

Bob Jensen's fraud updates are at http://www.trinity.edu/rjensen/FraudUpdates.htm

 


May 2, 2007 message from Carnegie President [carnegiepresident@carnegiefoundation.org]

A different way to think about ... accountability

Alex McCormick's timely essay brings to our attention one of the most intriguing paradoxes associated with high-stakes measurement of educational outcomes. The more importance we place on going public with the results of an assessment, the higher the likelihood that the assessment itself will become corrupted, undermined and ultimately of limited value. Some policy scholars refer to the phenomenon as a variant of "Campbell's Law," named for the late Donald Campbell, an esteemed social psychologist and methodologist. Campbell stated his principle in 1976: "The more any quantitative social indicator is used for social decisionmaking, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

In the specific case of the Spellings Commission report, Alex points out that the Secretary's insistence that information be made public on the qualities of higher education institutions will place ever higher stakes on the underlying measurements, and that very visibility will attenuate their effectiveness as accountability indices. How are we to balance the public's right to know with an institution's need for the most reliable and valid information? Alex McCormick's analysis offers us another way to think about the issue.

Carnegie has created a forum—Carnegie Conversations—where you can engage publicly with the author and read and respond to what others have to say about this article at http://www.carnegiefoundation.org/perspectives/april2007 .

Or you may respond to Alex privately through carnegiepresident@carnegiefoundation.org .

If you would like to unsubscribe to Carnegie Perspectives, use the same address and merely type "unsubscribe" in the subject line of your email to us.

We look forward to hearing from you.

Sincerely,

Lee S. Shulman
President The Carnegie Foundation for the Advancement of Teaching

Jensen Comment
The fact that an assessment provides incentives to cheat is not a reason not to assess. The fact that we assign grades to students gives them incentives to cheat, but that does not justify ceasing to assess, because the assessment process is in many instances the major incentive for a student to work harder and learn more. The fact that business firms have to be audited and produce financial statements provides incentives to cheat, but that does not justify not holding business firms accountable. Alex McCormick's analysis and Shulman's concurrence are a bit one-sided in opposing the Spellings Commission recommendations.

Also see Full Disclosure to Consumers of Higher Education at http://www.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure


School Assessment and College Admission Testing

July 25, 2006 query from Carol Flowers [cflowers@OCC.CCCD.EDU]

I am looking for a study that I saw. I was unsure if someone in this group had supplied the link originally. It was a very honest and extremely comprehensive evaluation of higher education. In it, the Higher Education Evaluation and Research Group was constantly quoted. But what organizations it is affiliated with, I am unsure.

They commented on the lack of student academic preparedness in our educational system today along with other challenging areas that need to be addressed in order to serve the population with which we now deal.

If anyone remembers such a report, please forward to me the url.

Thank You!

July 25, 2006 reply from Bob Jensen

Hi Carol,

I think the HEERG is affiliated with the Chancellor's Office of the California Community Colleges. It is primarily focused upon accountability  and assessment of these colleges.

HEERG --- http://snipurl.com/HEERG

Articles related to your query include the following:

Leopards in the Temple --- http://www.insidehighered.com/views/2006/06/12/caesar  

Accountability, Improvement and Money --- http://www.insidehighered.com/views/2005/05/03/lombardi

Grade Inflation and Abdication --- http://www.insidehighered.com/views/2005/06/03/lombardi

Students Read Less. Should We Care? --- http://www.insidehighered.com/views/2005/08/23/lombardi

Missing the Mark: Graduation Rates and University Performance --- http://www.insidehighered.com/views/2005/02/14/lombardi2


Assessment of Learning Achievements of College Graduates

"Getting the Faculty On Board," by Freeman A. Hrabowski III, Inside Higher Ed, June 23, 2006 --- http://www.insidehighered.com/views/2006/06/23/hrabowski

But as assessment becomes a national imperative, college and university leaders face a major challenge: Many of our faculty colleagues are skeptical about the value of external mandates to measure teaching and learning, especially when those outside the academy propose to define the measures. Many faculty members do not accept the need for accountability, but the assessment movement’s success will depend upon faculty because they are responsible for curriculum, instruction and research. All of us — policy makers, administrators and faculty — must work together to develop language, strategies and practices that help us appreciate one another and understand the compelling need for assessment — and why it is in the best interest of faculty and students.

Why is assessment important? We know from the work of researchers like Richard Hersh, Roger Benjamin, Mark Chun and George Kuh that college enrollment will be increasing by more than 15 percent nationally over the next 15 years (and in some states by as much as 50 percent). We also know that student retention rates are low, especially among students of color and low-income students. Moreover, of every 10 children who start 9th grade, only seven finish high school, five start college, and fewer than three complete postsecondary degrees. And there is a 20 percent gap in graduation rates between African Americans (42 percent) and whites (62 percent). These numbers are of particular concern given the rising higher education costs, the nation’s shifting demographics, and the need to educate more citizens from all groups.

At present, we do not collect data on student learning in a systematic fashion and rankings on colleges and universities focus on input measures, rather than on student learning in the college setting. Many people who have thought about this issue agree: We need to focus on “value added” assessment as an approach to determine the extent to which a university education helps students develop knowledge and skills. This approach entails comparing what students know at the beginning of their education and what they know upon graduating. Such assessment is especially useful when large numbers of students are not doing well — it can and should send a signal to faculty about the need to look carefully at the “big picture” involving coursework, teaching, and the level of support provided to students and faculty.

Many in the academy, however, continue to resist systematic and mandated assessment in large part because of problems they see with K-12 initiatives like No Child Left Behind — e.g., testing that focuses only on what can be conveniently measured, unacceptable coaching by teachers, and limiting what is taught to what is tested. Many academics believe that what is most valuable in the college experience cannot be measured during the college years because some of the most important effects of a college education only become clearer some time after graduation. Nevertheless, more institutions are beginning to understand that value-added assessment can be useful in strengthening teaching and learning, and even student retention and graduation rates.

It is encouraging that a number of institutions are interested in implementing value-added assessment as an approach to evaluate student progress over time and to see how they compare with other institutions. Such strategies are more effective when faculty and staff across the institution are involved. Examples of some best practices include the following:

  1. Constantly talking with colleagues about both the challenges and successful initiatives involving undergraduate education.
  2. Replicating successful initiatives (best practices from within and beyond the campus), in order to benefit as many students as possible.
  3. Working continuously to improve learning based on what is measured — from advising practices and curricular issues to teaching strategies — and making changes based on what we learn from those assessments.
  4. Creating accountability by ensuring that individuals and groups take responsibility for different aspects of student success.
  5. Recruiting and rewarding faculty who are committed to successful student learning (including examining the institutional reward structure).
  6. Taking the long view by focusing on initiatives over extended periods of time — in order to integrate best practices into the campus culture.

We in the academy need to think broadly about assessment. Most important, are we preparing our students to succeed in a world that will be dramatically different from the one we live in today? Will they be able to think critically about the issues they will face, working with people from all over the globe? It is understandable that others, particularly outside the university, are asking how we demonstrate that our students are prepared to handle these issues.

Assessment is becoming a national imperative, and it requires us to listen to external groups and address the issues they are raising. At the same time, we need to encourage and facilitate discussions among our faculty — those most responsible for curriculum, instruction, and research — to grapple with the questions of assessment and accountability. We must work together to minimize the growing tension among groups — both outside and inside the university — so that we appreciate and understand different points of view and the compelling need for assessment.

Bob Jensen's threads on controversies in higher education are at http://www.trinity.edu/rjensen/HigherEdControversies.htm

NCLB = No Child Left Behind Law
A September 2007 Thomas B. Fordham Institute report found NCLB's assessment system "slipshod" and characterized by "standards that are discrepant state to state, subject to subject, and grade to grade." For example, third graders scoring at the sixth percentile on Colorado's state reading test are rated proficient. In South Carolina the third grade proficiency cut-off is the sixtieth percentile.
Peter Berger, "Some Will Be Left Behind," The Irascible Professor, November 10, 2007 --- http://irascibleprofessor.com/comments-11-10-07.htm
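A percentile-based cut score fixes, by definition, the share of test takers who will be called proficient, so two states can test identical students and report wildly different results. A small sketch, using hypothetical simulated scores rather than any state's actual data, makes the arithmetic explicit:

```python
# A sketch (hypothetical simulated scores, not any state's actual data) of why
# percentile-based cut scores are incomparable across states: the cut-off itself
# dictates the share of students labeled "proficient."
import numpy as np

rng = np.random.default_rng(seed=0)
scores = rng.normal(loc=500, scale=100, size=10_000)  # one simulated cohort

for label, pct in [("6th-percentile cut (Colorado-style)", 6),
                   ("60th-percentile cut (South Carolina-style)", 60)]:
    cut = np.percentile(scores, pct)        # cut score implied by the percentile
    proficient = np.mean(scores >= cut)     # share of the same cohort called proficient
    print(f"{label}: cut score = {cut:.0f}, proficient = {proficient:.0%}")

# Identical students, identical test: roughly 94% "proficient" under the first rule
# and roughly 40% under the second.  The label reflects the cut-off, not the learning.
```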


"This is Only a Test," by Peter Berger, The Irascible Professor, December 5, 2005 --- http://irascibleprofessor.com/comments-12-05-05.htm

Back in 2002 President Bush predicted "great progress" once schools began administering the annual testing regime mandated by No Child Left Behind. Secretary of Education Rod Paige echoed the President's sentiments. According to Mr. Paige, anyone who opposed NCLB testing was guilty of "dismissing certain children" as "unteachable."

Unfortunately for Mr. Paige, that same week The New York Times documented "recent" scoring errors that had "affected millions of students" in "at least twenty states." The Times report offered a pretty good alternate reason for opposing NCLB testing. Actually, it offered several million pretty good alternate reasons.

Here are a few more.

There's nothing wrong with assessing what students have learned. It lets parents, colleges, and employers know how our kids are doing, and it lets teachers know which areas need more teaching. That's why I give quizzes and tests and one of the reasons my students write essays.

Of course, everybody who's been to school knows that some teachers are tougher graders than others. Traditional standardized testing, from the Iowa achievement battery to the SATs, was supposed to help us gauge the value of one teacher's A compared to another's. It provided a tool with which we could compare students from different schools.

This works fine as long as we recognize that all tests have limitations. For example, for years my students took a nationwide standardized social studies test that required them to identify the President who gave us the New Deal. The problem was the seventh graders who took the test hadn't studied U.S. history since the fifth grade, and FDR usually isn't the focus of American history classes for ten-year-olds. He also doesn't get mentioned in my eighth grade U.S. history class until May, about a month after eighth graders took the test.

In other words, wrong answers about the New Deal only meant we hadn't gotten there yet. That's not how it showed up in our testing profile, though. When there aren't a lot of questions, getting one wrong can make a surprisingly big difference in the statistical soup.

Multiply our FDR glitch by the thousands of curricula assessed by nationwide testing. Then try pinpointing which schools are succeeding and failing based on the scores those tests produce. That's what No Child Left Behind pretends to do.

Testing fans will tell you that cutting edge assessments have eliminated inconsistencies like my New Deal hiccup by "aligning" the tests with new state of the art learning objectives and grade level expectations. The trouble is these newly minted goals are often hopelessly vague, arbitrarily narrow, or so unrealistic that they're pretty meaningless. That's when they're not obvious and the same as they always were.

New objectives also don't solve the timing problem. For example, I don't teach poetry to my seventh grade English students. That's because I know that their eighth grade English teacher does an especially good job with it the following year, which means that by the time they leave our school, they've learned about poetry. After all, does it matter whether they learn to interpret metaphors when they're thirteen or they're fourteen as long as they learn it?

Should we change our program, which matches our staff's expertise, just to suit the test's arbitrary timing? If we don't, our seventh graders might not make NCLB "adequate yearly progress." If we do, our students likely won't learn as much.

Which should matter more?

Even if we could perfectly match curricula and test questions, modern assessments would still have problems. That's because most are scored according to guidelines called rubrics. Rubric scoring requires hastily trained scorers, who typically aren't teachers or even college graduates, to determine whether a student's essay "rambles" or "meanders." Believe it or not, that choice represents a twenty-five percent variation in the score. Or how about distinguishing between "appropriate sentence patterns" and "effective sentence structure," or language that's "precise and engaging" versus "fluent and original."

These are the flip-a-coin judgments at the heart of most modern assessments. Remember that the next time you read about which schools passed and which ones failed.

Unreliable scoring is one reason the General Accountability Office condemned data "comparisons between states" as "meaningless." It's why CTB/McGraw-Hill had to recall and rescore 120,000 Connecticut writing tests after the scores were released. It's why New York officials discarded the scores from its 2003 Regents math exam. A 2001 Brookings Institution study found that "fifty to eighty percent of the improvement in a school's average test scores from one year to the next was temporary" and "had nothing to do with long-term changes in learning or productivity." A senior RAND analyst warned that today's tests aren't identifying "good schools" and "bad schools." Instead, "we're picking out lucky and unlucky schools."

Students aren't the only victims of faulty scoring. Last year the Educational Testing Service conceded that more than ten percent of the candidates taking its 2003-2004 nationwide Praxis teacher licensing exam incorrectly received failing scores, which resulted in many of them not getting jobs. ETS attributed the errors to the "variability of human grading."

The New England Common Assessment Program, administered for NCLB purposes to all students in Vermont, Rhode Island, and New Hampshire, offers a representative glimpse of the cutting edge. NECAP is heir to all the standard problems with standardized test design, rubrics, and dubiously qualified scorers.

NECAP security is tight. Tests are locked up, all scrap paper is returned to headquarters for shredding, and testing scripts and procedures are painstakingly uniform. Except that on the mathematics exam, each school gets to choose whether its students can use calculators.

Whether or not you approve of calculators on math tests, how can you talk with a straight face about a "standardized" math assessment if some students get to use them and others don't? Still more ridiculous, there's no box to check to show whether you used one or not, so the scoring results don't even differentiate between students and schools that did and didn't.

Finally, guess how NECAP officials are figuring out students' scores. They're asking classroom teachers. Five weeks into the year, before we've even handed out a report card to kids we've just met, we're supposed to determine each student's "level of proficiency" on a twelve point scale. Our ratings, which rest on distinguishing with allegedly statistical accuracy between "extensive gaps," "gaps," and "minor gaps," are a "critical piece" and "key part of the NECAP standard setting process."

Let's review. Because classroom teachers' grading standards aren't consistent enough from one school to the next, we need a standardized testing program. To score the standardized testing program, every teacher has to estimate within eight percentage points how much their students know so test officials can figure out what their scores are worth and who passed and who failed.

If that makes sense to you, you've got a promising future in education assessment. Unfortunately, our schools and students don't.


"College Board Asks Group Not to Post Test Analysis," by Diana Jean Schemol, The New York Times, December 4, 2004 --- http://www.nytimes.com/2004/12/04/education/04college.html?oref=login 

The College Board, which owns the SAT college entrance exam, is demanding that a nonprofit group critical of standardized tests remove from its Web site data that breaks down scores by race, income and sex.

The demand, in a letter to The National Center for Fair and Open Testing, also known as FairTest, accuses the group of infringing on the College Board's copyright.

"Unfortunately, your misuse overtly bypasses our ownership and significantly impacts the perceptions of students, parents and educators regarding the services we provide," the letter said.

The move by the College Board comes amid growing criticism of the exams, with more and more colleges and universities raising questions about their usefulness as a gauge of future performance and discarding them as requirements for admission. The College Board is overhauling parts of the exam and will be using a new version beginning in March.

FairTest has led opposition to the exams, and releases the results to support its accusation of bias in the tests, a claim rejected by test makers, who contend the scores reflect true disparities in student achievement. FairTest posts the information in easily accessible charts, and Robert A. Schaeffer, its spokesman, said they were the Web site's most popular features.

In its response to the College Board letter, which FairTest posted on its Web site on Tuesday, the group said it would neither take down the data nor seek formal permission to use it. FairTest has been publicly showing the data for nearly 20 years, Mr. Schaeffer said, until now without objection from the testing company, which itself releases the data in annual reports it posts on its Web site.

"You can't copyright numbers like that," Mr. Schaeffer said. "It's all about public education and making the public aware of score gaps and the potential for bias in the exams."

Devereux Chatillon, a specialist on copyright law at Sonnenschein, Nath & Rosenthal in New York, said case law supported FairTest's position. "Facts are not copyrightable," Ms. Chatillon said. In addition, she said, while the College Board may own the exam, the real authors of the test results are those taking the exams.

Continued in article

2004 Senior Test Scores:  ACT --- http://www.fairtest.org/nattest/ACT%20Scores%202004%20Chart.pdf 

2004 Senior Test Scores:  SAT --- http://www.fairtest.org/nattest/SAT%20Scoresn%202004%20Chart.pdf 

Fair Test Reacts to the SAT Outcomes --- http://www.fairtest.org/univ/2004%20SAT%20Score%20Release.html 

Fair Test Home --- http://www.fairtest.org/ 

Jensen Comment:
If there is to be a test that sets apart students that demonstrate higher ability, motivation, and aptitude for college studies, how would it differ from the present Princeton tests that have been designed and re-designed over and over again?  I cannot find any Fair Test models of what such a test would look like.  One would assume that by its very name Fair Test still agrees that some test is necessary.   However, the group's position seems to be that no national test is feasible that will give the same means and standard deviations for all groups (males, females, and race categories).  Fair Test advocates "assessments based on students' actual performances, not one-shot, high-stakes exams."  

Texas has such a Fair Test system in place for admission to any state university.  The President of the University of Texas, however, wants the system to be modified since his top-rated institution is losing all of its admission discretion and may soon be overwhelmed with more admissions than can be seated in classrooms.  My module on this issue, which was a special feature on 60 Minutes from CBS, is at http://www.trinity.edu/rjensen/book04q4.htm#60Minutes 

The problem with performance-based systems (such as the requirement that any state university in Texas must accept any graduate in the top 10% of the graduating class from any Texas high school) is that high schools in the U.S. generally follow the same grading scale as Harvard University.  Most classes give over half the students A grades.  Some teachers give A grades just for attendance or effort apart from performance.  This means that when it comes to isolating the top 10% of each graduating class, we're talking in terms of Epsilon differences.  I hardly think Epsilon is a fair criterion for admission to college.  Also, as was pointed out on 60 Minutes, students with 3.9 grade averages from some high schools tend to score much lower than students with 3.0 grade averages from other high schools.  This might achieve better racial mix but hardly seems fair to the 3.0 student who was unfortunate enough to live near a high school having a higher proportion of top students.   That was the theme of the 60 Minutes CBS special contrasting a 3.9 low SAT student who got into UT versus a 3.0 student who had a high SAT but was denied admission to UT.
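A rough simulation (hypothetical numbers, not actual Texas transcripts) suggests just how small those Epsilon differences can be: when most course grades are A's, the GPA separating the top 10% of a class from the next decile shrinks to a few hundredths of a point.

```python
# A hedged illustration (hypothetical numbers, not actual transcript data) of the
# "Epsilon differences" point: when most course grades are A's, the GPA separating
# the top 10% of a class from the students just below them is tiny.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical inflated grade mix: 60% A (4.0), 30% B (3.0), 10% C (2.0),
# with each student's GPA averaged over 20 such course grades.
grades = rng.choice([4.0, 3.0, 2.0], p=[0.6, 0.3, 0.1], size=(5_000, 20))
gpas = grades.mean(axis=1)

cut_top10 = np.percentile(gpas, 90)   # GPA needed to be in the top 10%
cut_top20 = np.percentile(gpas, 80)   # GPA one decile lower
print(f"top-10% cut-off: {cut_top10:.2f}")
print(f"80th-percentile GPA: {cut_top20:.2f}")
print(f"difference (the 'Epsilon'): {cut_top10 - cut_top20:.2f}")  # a few hundredths
```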

What we really need is to put more resources into fair chances for those who test poorly or happen to fall Epsilon below that hallowed 10% cutoff in a performance-based system.  This may entail more time and remedial effort on the part of students before or after entering college.


Mount Holyoke Dumps the SAT
Mount Holyoke College, which decided in 2001 to make the SAT optional, is finding very little difference in academic performance between students who provided their test scores and those who didn't.  The women's liberal arts college is in the midst of one of the most extensive studies to date about the impact of dropping the SAT -- a research project financed with $290,000 from the Mellon Foundation.  While the study isn't complete, the college is releasing some preliminary results. So far, Mount Holyoke has found that there is a difference of 0.1 point in the grade-point average of those who do and do not submit SAT scores. That is equivalent to approximately one letter grade in one course over a year of study.  Those results are encouraging to Mount Holyoke officials about their decision in 2001.
Scott Jaschik, "Not Missing the SAT," Inside Higher Ed March 9, 2005 --- http://www.insidehighered.com/insider/not_missing_the_sat 
Jensen Comment:
These results differ from the experiences of the University of Texas system where grades and test scores differ greatly between secondary schools.   Perhaps Mount Holyoke is not getting applications from students in the poorer school districts.  See http://www.trinity.edu/rjensen/book04q4.htm#60Minutes 
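The reported 0.1-point gap is easy to reconcile with the "one letter grade in one course" description. Assuming, purely for illustration, a typical load of eight equally weighted courses in a year:

```python
# Back-of-the-envelope check of the "0.1 GPA point is roughly one letter grade in one
# course over a year" description, assuming (hypothetically) eight equally weighted
# courses in a year on a 4.0 scale.
courses_per_year = 8
letter_grade_step = 1.0                      # e.g., a B instead of an A
gpa_shift = letter_grade_step / courses_per_year
print(gpa_shift)                             # 0.125, close to the reported 0.1 gap
```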


Dangers of Self Assessment

My undergraduate students can’t accurately predict their academic performance or skill levels. Earlier in the semester, a writing assignment on study styles revealed that 14 percent of my undergraduate English composition students considered themselves “overachievers.” Not one of those students was receiving an A in my course by midterm. Fifty percent were receiving a C, another third was receiving B’s and the remainder had earned failing grades by midterm. One student wrote, “overachievers like myself began a long time ago.” She received a 70 percent on her first paper and a low C at midterm.
Shari Wilson, "Ignorant of Their Ignorance," Inside Higher Ed, November 16, 2006 --- http://www.insidehighered.com/views/2006/11/16/wilson
Jensen comment
This does not bode well for self assessment.


AICPA Educational Competency Assessment for Accounting Students

Educational Competency Assessment (ECA) Web Site --- http://www.aicpa-eca.org/
The AICPA recently won a National Association of Colleges and Employers (NACE) Excellence Award for Educational Programming for developing this ECA site to help accounting educators integrate the skill-based competencies needed by entry-level accounting professionals.

The AICPA provides this resource to help educators integrate the skills-based competencies needed by entry-level accounting professionals. These competencies, defined within the AICPA Core Competency Framework Project, have been derived from academic and professional competency models and have been widely endorsed within the academic community. Created by educators for educators, the evaluation and educational strategies resources on this site are offered for your use and adaptation.

The ECA site contains a LIBRARY that, in addition to the Core Competency Database and Education Strategies, provides information and guidance on Evaluating Competency Coverage and Assessing Student Performance.

To assist you as you assess student performance and evaluate competency coverage in your courses and programs, the ECA ORGANIZERS guide you through the process of gathering, compiling and analyzing evidence and data so that you may document your activities and progress in addressing the AICPA Core Competencies.


Online Education Effectiveness and Testing

Learning Effectiveness in Corporate Universities
A group of colleges that serve adult students on Monday formally announced their effort to measure and report their effectiveness, focusing on outcomes in specific programs. The initiative known as “Transparency by Design,” on which Inside Higher Ed reported earlier, has grown to include a mix of 10 nonprofit and for-profit institutions: Capella University, Charter Oak State College, Excelsior College, Fielding Graduate University, Franklin University, Kaplan University, Regis University, Rio Salado College, Western Governors University, and Union Institute & University.
Inside Higher Ed, October 23, 2007 --- http://www.insidehighered.com/news/2007/10/23/qt


Question
What are some of the features of UserVue from TechSmith for evaluating student learning?

Some of the reviews of the revised “free” Sound Recorder in Windows Vista are negative. It’s good to learn that Richard Campbell is having a good experience with it when recording audio and when translating the audio into text files --- http://microsoft.blognewschannel.com/archives/2006/05/24/windows-vista-sound-recorder 

For those of you on older systems as well as Vista there is a free recorder called Audacity that I like --- http://audacity.sourceforge.net/ 
I really like Audacity. There are some Wiki tutorials at http://audacity.sourceforge.net/help/tutorials 
Some video tutorials are linked at http://youtube.com/results?search_query=audacity+tutorial&search=Search 

I have some dated threads on speech recognition at http://www.trinity.edu/rjensen/speech.htm  Mac users can find options at http://www.macspeech.com/ 

In addition, I like Camtasia (recording screen shots and camera video) and Dubit (for recording audio and editing audio) from TechSmith --- http://www.techsmith.com/ 
TechSmith   products are very good, but they are not free downloads.

UserVue --- http://www.techsmith.com/uservue/features.asp 
TechSmith has a newer product called UserVue that really sounds exciting, although I’ve not yet tried it. It allows you to view and record what is happening on someone else’s computer, such as a student’s computer. Multiple computers can be viewed at the same time. Images and text can be recorded. Pop-up comments can be inserted by the instructor into text written by students.

UserVue can be used for remote testing!

UserVue offers great promise for teaching students with disabilities, such as sight- and/or hearing-impaired students --- http://www.trinity.edu/rjensen/000aaa/thetools.htm#Handicapped

 


Accounting Professors in Support of Online Testing That, Among Other Things, Reduces Cheating
These same professors became widely known for their advocacy of self-learning in place of lecturing

"In Support of the E-Test," by Elia Powers, Inside Higher Ed, August 29, 2007 --- http://www.insidehighered.com/news/2007/08/29/e_test

Critics of testing through the computer often argue that it’s difficult to tell if students are doing their own work. It’s also unclear to some professors whether using the technology is worth their while. A new study makes the argument that giving electronic tests can actually reduce cheating and save faculty time.

Anthony Catanach Jr. and Noah Barsky, both associate professors of accounting at the Villanova School of Business, came to that conclusion after speaking with faculty members and analyzing the responses of more than 100 students at Villanova and Philadelphia University. Both Catanach and Barsky teach a course called Principles of Managerial Accounting that utilizes the WebCT Vista e-learning platform. The professors also surveyed undergraduates at Philadelphia who took tests electronically.

The Villanova course follows a pattern of Monday lecture, Wednesday case assignment, Friday assessment. The first two days require in-person attendance, while students can check in Friday from wherever they are.

“It never used to make sense to me why at business schools you have Friday classes,” Catanach said. “As an instructor it’s frustrating because 30 percent of the class won’t show up, so you have to redo material. We said, how can we make that day not lose its effectiveness?”

The answer, he and Barsky determined, was to make all electronically submitted group work due on Fridays and have that be electronic quiz day. That’s where academic integrity came into play. Since the professors weren’t requiring students to be present to take the exams, they wanted to deter cheating. Catanach said programs like the one he uses mitigate the effectiveness of looking up answers or consulting friends.

In electronic form, questions are given to students in random order so that copying is difficult. Professors can change variables within a problem to make sure that each test is unique while also ensuring a uniform level of difficulty. The programs also measure how much time a student spends on each question, which could signal to an instructor that a student might have slowed to use outside resources. Backtracking on questions generally is not permitted. Catanach said he doesn’t pay much attention to time spent on individual questions. And since he gives his students a narrow time limit to finish their electronic quizzes, consulting outside sources would only lead students to be rushed by the end of the exam, he added.
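The article does not describe the underlying mechanics, but the two anti-copying ideas it mentions (shuffling question order and changing the variables within a problem for each student) are easy to sketch. The example below is an illustration only; the question, names, and seeding scheme are invented, and this is not the WebCT Vista setup the professors actually used.

```python
# An illustration of the two anti-copying ideas described above: question order is
# shuffled per student, and the numbers inside a problem are re-drawn per student so
# that answers cannot simply be passed along.  The question, names, and seeding scheme
# are invented; this is not the WebCT Vista implementation the professors used.
import random

def contribution_margin_question(rng: random.Random) -> dict:
    """One parameterized managerial-accounting problem; each draw has its own answer."""
    price = rng.randrange(20, 51)             # selling price per unit
    variable_cost = rng.randrange(5, price)   # variable cost per unit
    return {
        "text": (f"A unit sells for ${price} with a variable cost of ${variable_cost}. "
                 f"What is the contribution margin per unit?"),
        "answer": price - variable_cost,
    }

def build_quiz(student_id: str, question_makers: list, term_seed: str = "fall-2007") -> list:
    """Seed a per-student random stream, generate each question, then shuffle the order."""
    rng = random.Random(f"{term_seed}:{student_id}")
    quiz = [make(rng) for make in question_makers]
    rng.shuffle(quiz)
    return quiz

# Five variants of the same problem type, in a student-specific order with student-specific numbers.
for q in build_quiz("student042", [contribution_margin_question] * 5):
    print(q["text"], "->", q["answer"])
```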

Forty-five percent of students who took part in the study reported that the electronic testing system reduced the likelihood of their cheating during the course.

Stephen Satris, director of the Center for Academic Integrity at Clemson University, said he applauds the use of technology to deter academic dishonesty. Students who take these courses might think twice about copying or plagiarizing on other exams, he said.

“It’s good to see this program working,” Satris said. “It does an end run around cheating.”

The report also makes the case that both faculty and students save time with e-testing. Catanach is up front about the initial time investment: For instructors to make best use of the testing programs, they need to create a “bank” of exam questions and code them by topic, learning objectives and level of difficulty. That way, the program knows how to distribute questions. (He said instructors should budget roughly 10 extra hours per week during the course for this task.)
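The article describes the question bank only at a high level. One plausible way to organize such a bank, with every item coded by topic, learning objective, and difficulty so that each student's exam draws a comparable mix, is sketched below; the field names, tags, and blueprint are all invented for illustration.

```python
# A plausible (purely illustrative) structure for the kind of coded question bank
# described above: every item is tagged by topic, learning objective, and difficulty,
# and each exam draws the same number of items per (topic, difficulty) cell so every
# student's test differs in content but not in difficulty mix.
import random
from collections import defaultdict

question_bank = [
    {"id": 1, "topic": "budgeting", "objective": "LO3", "difficulty": "easy", "text": "..."},
    {"id": 2, "topic": "budgeting", "objective": "LO3", "difficulty": "hard", "text": "..."},
    {"id": 3, "topic": "costing",   "objective": "LO1", "difficulty": "easy", "text": "..."},
    {"id": 4, "topic": "costing",   "objective": "LO2", "difficulty": "hard", "text": "..."},
    # ... a real bank would hold many items per cell
]

# Exam blueprint: how many questions to draw from each (topic, difficulty) cell.
blueprint = {("budgeting", "easy"): 1, ("budgeting", "hard"): 1,
             ("costing", "easy"): 1, ("costing", "hard"): 1}

def draw_exam(bank: list, plan: dict, rng: random.Random) -> list:
    """Group the bank into cells, sample the planned count from each cell, and shuffle."""
    cells = defaultdict(list)
    for item in bank:
        cells[(item["topic"], item["difficulty"])].append(item)
    exam = []
    for cell, count in plan.items():
        exam.extend(rng.sample(cells[cell], count))
    rng.shuffle(exam)
    return exam

exam = draw_exam(question_bank, blueprint, random.Random("student042"))
print([q["id"] for q in exam])   # a different selection and ordering per student seed
```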

The payoff, he said, comes later in the term. In the study, professors reported recouping an average of 80 hours by using the e-exams. Faculty don’t have to hand-grade tests (that often being a deterrent for the Friday test, Catanach notes), and graduate students or administrative staff can help prepare the test banks, the report points out.

Since tests are taken from afar, class time can be used for other purposes. Students are less likely to ask about test results during sessions, the study says, because the computer program gives them immediate results and points to pages where they can find out why their answers were incorrect. Satris said this type of system likely dissuades students from grade groveling, because the explanations are all there on the computer. He said it also makes sense in other ways.

“I like that professors can truly say, ‘I don’t know what’s going to be on the test. There’s a question bank; it’s out of my control,’ ” he said.

And then there’s the common argument about administrative efficiency: An institution can keep a permanent electronic record of its students.

Survey results showed that Villanova students, who Catanach said were more likely to have their own laptop computers and be familiar with e-technology, responded better to the electronic testing system than did students at Philadelphia, who weren’t as tech savvy. Both Catanach and Satris said the e-testing programs are not likely to excite English and philosophy professors, whose disciplines call for essay questions rather than computer-graded content.

From a testing perspective, Catanach said the programs can be most helpful for faculty with large classes who need to save time on grading. That’s why the programs have proven popular at community colleges in some of the larger states, he said.

“It works for almost anyone who wants to have periodic assessment,” he said. “How much does the midterm and final motivate students to keep up with material? It doesn’t. It motivates cramming. This is a tool to help students keep up with the material.”

August 29, 2007 reply from Stokes, Len [stokes@SIENA.EDU]

I am also a strong proponent of active learning strategies. I have the luxury of a small class size, usually fewer than 30, so I can adapt my classes to student interaction and can have periodic assessment opportunities as they fit the flow of materials rather than the calendar. I still think a push toward smaller classes with more faculty face time is better than computer tests. One lecture day and one case day do not mean active learning. It is better than no case days, but it is still a lecture day. I don’t have real lecture days; every day involves some interactive material from the students.

While I admit I can’t pick up all trends in grading the tests, I do pick up a lot of things, so I tend to use a high proportion of essays and small problems. I then try to address common errors in class and also can look at my approach to teaching the material.

Len

 

Bob Jensen attempts to make a case that self learning is more effective for metacognitive reasons --- http://www.trinity.edu/rjensen/265wp.htm
This document features the research of Tony Catanach, David Croll, Bob Grinaker, and  Noah Barsky.

Bob Jensen's threads on the myths of online education are at http://www.trinity.edu/rjensen/000aaa/thetools.htm#Myths


Barbara gave me permission to post the following message on March 15, 2006
My reply follows her message.

Professor Jensen:

I need your help in working with regulators who are uncomfortable with online education.

I am currently on the faculty at the University of Dallas in Irving, Texas and I abruptly learned yesterday that the Texas State Board of Public Accountancy distinguishes online and on campus offering of ethics courses that it approves as counting for students to meet CPA candidacy requirements. Since my school offers its ethics course in both modes, I am suddenly faced with making a case to the TSBPA in one week's time to avoid rejection of the online version of the University of Dallas course.

I have included in this email the "story" as I understand it that explains my situation. It isn't a story about accounting or ethics, it is a story about online education.

I would like to talk to you tomorrow because of your expertise in distance education and involvement in the profession. In addition, I am building a portfolio of materials this week for the Board meeting in Austin March 22-23 to make a case for their approval (or at least not rejection) of the online version of the ethics course that the Board already accepts in its on campus version. I want to include compelling research-based material demonstrating the value of online learning, and I don't have time to begin that literature survey myself. In addition, I want to be able to present preliminary results from reviewers of the University of Dallas course about the course's merit in presentation of the content in an online delivery.

Thank you for any assistance that you can give me.

Barbara W. Scofield
Associate Professor of Accounting
University of Dallas
1845 E Northgate Irving, TX 75062
972-721-5034

scofield@gsm.udallas.edu

A statement of the University of Dallas and Texas State Board of Public Accountancy and Online Learning

The TSBPA approved the University of Dallas ethics program in 2004. The course that was approved was a long-standing course, required in several different graduate programs, called Business Ethics. The course was regularly taught on campus (since 1995) and online (since 2001).

The application for approval of the ethics course did not ask for information about whether the class was on campus or online and the syllabus that was submitted happened to be the syllabus of an on campus section. The TSBPA's position (via Donna Hiller) is that the Board intended to approve only the on campus version of the course, and that the Board inferred it was an on campus course because the sample syllabus that was submitted was an on campus course.

Therefore the TSBPA (via Donna Hiller) is requiring that University of Dallas students who took the online version of the ethics course retake the exact same course in its on campus format. While the TSBPA (via Donna Hiller) has indicated that the online course cannot at this time be approved and its scheduled offering in the summer will not provide students with an approved course, Donna Hiller, at my request, has indicated that she will take this issue to the Board for their decision next week at the Executive Board Meeting on March 22 and the Board Meeting on March 23.

There are two issues:

1. Treatment of students who were relying on communication from the Board at the time they took the class that could reasonably have been interpreted to confer approval of both the online and on campus sections of the ethics course.

2. Status of the upcoming summer online ethics class.

My priority is establishing the status of the upcoming summer online ethics class. The Board has indicated through its pilot program with the University of Texas at Dallas that there is a place for online ethics classes in the preparation of CPA candidates. The University of Dallas is interested in providing the TSBPA with any information or assessment necessary to meet the needs of the Board to understand the online ethics class at the University of Dallas. Although not currently privy to the Board's specific concerns about online courses, the University of Dallas believes that it can demonstrate sufficient credibility for the course because of the following factors:

A. The content of the online course is the same as the on campus course. Content comparison can be provided.

B. The instructional methods of the online course involve intense student-to-student, instructor-to-student, and student-to-content interaction at a level equivalent to an on campus course. Empirical information about interaction in the course can be provided.

C. The instructor for the course is superbly qualified and a long-standing ethics instructor and distance learning instructor. The vita of the instructor can be provided.

D. There are processes for course assessment in place that regularly prompt the review of this course and these assessments can be provided to the board along with comparisons with the on campus assessments.

E. The University of Dallas will seek to coordinate with the work done by the University of Texas at Dallas to provide information at least equivalent to that provided by the University of Texas at Dallas and to meet at a minimum the tentative criteria for online learning that UT Dallas has been empowered to recommend to the TSBPA. Contact with the University of Texas at Dallas has been initiated.

When the online ethics course is granted a path to approval by the Board, I am also interested in addressing the issue of TSBPA approval of students who took the class between the original ethics course approval date and March 13, 2006, the date that the University of Dallas became aware of the TSBPA intent (through Donna Hiller) that the TSBPA distinguished online and on campus ethics classes.

The University of Dallas believes that the online class in fact provided these students with a course that completely fulfilled the general intent of the Board for education in ethics, since it is the same course as the approved on campus course (see above). The decision on the extent of commitment of the Board to students who relied on the Board's approval letter may be a legal issue of some sort that is outside of the current decision-making of the Board, but I want the Board to take the opportunity to consider that the reasonableness of the students' position and the students' actual preparation in ethics suggest that there should also be a path created to approval of online ethics courses taken at the University of Dallas during this prior time period. The currently proposed remedy of a requirement for students to retake the very same course on campus that students have already taken online appears excessively costly to Texans and the profession of accounting by delaying the entry of otherwise qualified individuals into public accountancy. High cost is justified when the concomitant benefits are also high. However, the benefit to Texans and the accounting profession from students who retake the ethics course seems to exist only in meeting the requirements of regulations that all parties diligently sought to meet in the first place and not in producing any actual additional learning experiences.

A reply to her from Bob Jensen

Hi Barbara,

May I share your questions and my responses in the next edition of New Bookmarks? This might be helpful to your efforts when others become informed. I will be in my office every day except for March 17. My phone number is 210-999-7347. However, I can probably be more helpful via email.

As discouraging as it may seem, if students know what is expected of them and must demonstrate what they have learned, pedagogy does not seem to matter. It can be online or onsite. It can be lecture or cases. It can be no teaching at all if there are talented and motivated students who are given great learning materials. This is called the well-known “No Significant Difference” phenomenon --- http://www.nosignificantdifference.org/

I think you should stress that insisting upon onsite courses is discriminatory against potential students whose life circumstances make it difficult or impossible to attend regular classes on campus.

I think you should make the case that online education is just like onsite education in the sense that learning depends on the quality and motivations of the students, faculty, and university that sets the employment and curriculum standards for quality. The issue is not onsite versus online. The issue is quality of effort.

The most prestigious schools like Harvard and Stanford and Notre Dame have a large number of credit and non-credit courses online. Entire accounting undergraduate and graduate degree programs are available online from such quality schools as the University of Wisconsin and the University of Maryland.  My guide to online training and education programs is at http://www.trinity.edu/rjensen/crossborder.htm

My main introductory document on the future of distance education is at http://www.trinity.edu/rjensen/000aaa/updateee.htm

Anticipate and deal with the main arguments against online education. The typical argument is that onsite students have more learning interactions with one another and with the instructor. This is absolutely false if the distance education course is designed to promote online interactions that do a better job of getting into each other's heads. In that case, online courses become superior to onsite courses.

Amy Dunbar teaches intensely interactive online courses with Instant Messaging. See Dunbar, A. 2004. “Genesis of an Online Course.” Issues in Accounting Education (2004),19 (3):321-343.

ABSTRACT: This paper presents a descriptive and evaluative analysis of the transformation of a face-to-face graduate tax accounting course to an online course. One hundred fifteen students completed the compressed six-week class in 2001 and 2002 using WebCT, classroom environment software that facilitates the creation of web-based educational environments. The paper provides a description of the required technology tools and the class conduct. The students used a combination of asynchronous and synchronous learning methods that allowed them to complete the coursework on a self-determined schedule, subject to semi-weekly quiz constraints. The course material was presented in content pages with links to Excel® problems, Flash examples, audio and video files, and self-tests. Students worked the quizzes and then met in their groups in a chat room to resolve differences in answers. Student surveys indicated satisfaction with the learning methods.

I might add that Amy is a veteran world class instructor both onsite and online. She’s achieved all-university awards for onsite teaching in at least three major universities. This gives her the credentials to judge how well her online courses compare with her outstanding onsite courses.

A free audio download of a presentation by Amy Dunbar is available at
http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm#2002   

The argument that students cannot be properly assessed for learning online is more problematic. Clearly it is easier to prevent cheating with onsite examinations. But there are ways of dealing with this problem.  My best example of an online graduate program that is extremely difficult is the Chartered Accountant School of Business (CASB) masters program for all of Western Canada. Students are required to take some onsite testing even though this is an online degree program. And CASB does a great job with ethics online. I was engaged to formally assess this program and came away extremely impressed. My main contact there is Don Carter carter@casb.com  .  If you are really serious about this, I would invite Don to come down and make a presentation to the Board. Don will convince them of the superiority of online education.

You can read some about the CASB degree program at http://www.casb.com/

You can read more about assessment issues at http://www.trinity.edu/rjensen/assess.htm

I think a lot of the argument against distance education comes from faculty fearful of one day having to teach online. First there is the fear of change. Second there is the genuine fear that is entirely justified --- if online teaching is done well it is more work and strain than onsite teaching. The strain comes from increased hours of communication with each and every student.

Probably the most general argument in favor of onsite education is that students living on campus have the social interactions and maturity development outside of class. This is most certainly a valid argument. However, when it comes to issues of learning of course content, online education can be as good as or generally better than onsite classes. Students in online programs are often older and more mature such that the on-campus advantages decline in their situations. Online students generally have more life, love, and work experiences already under their belts. And besides, you’re only talking about ethics courses rather than an entire undergraduate or graduate education.

I think if you deal with the learning interaction and assessment issues that you can make a strong case for distance education. There are some “dark side” arguments that you should probably avoid. But if you care to read about them, go to http://www.trinity.edu/rjensen/000aaa/theworry.htm

Bob Jensen

March 15, 2006 reply from Bruce Lubich [BLubich@UMUC.EDU]

Bob, as a director and teacher in a graduate accounting program that is exclusively online, I want to thank you for your support and eloquent defense of online education. Unfortunately, Texas's predisposition against online teaching also shows up in its education requirements for sitting for the CPA exam. Of the 30 required upper division accounting credits, at least 15 must "result from physical attendance at classes meeting regularly on the campus" (quote from the Texas State Board of Public Accountancy website at www.tsbpa.state.tx.us/eq1.htm)

Cynically speaking, it seems the state of Texas wants to be sure its classrooms are occupied.

Barbara, best of luck with your testimony.

Bruce Lubich
Program Director,
Accounting Graduate School of Management and Technology
University of Maryland University College

March 15, 2006 reply from David Albrecht [albrecht@PROFALBRECHT.COM]

At my school, Bowling Green, student credits for on-line accounting majors' classes are never approved by the department chair. He says that you can't trust the schools that are offering these. When told that some very reputable schools are offering the courses, he still says no because when the testing process is done on-line, or not in the physical presence of the professor, the grades simply can't be trusted.

David Albrecht

March 16, 2006 reply from Bob Jensen

Hi David,

One tack against luddites like that is to propose a compromise that virtually accepts all transfer credits from AACSB-accredited universities. It's difficult to argue that standards vary between online and onsite courses in a given program accredited by the AACSB. I seriously doubt that the faculty in that program would allow a double academic standard.

In fact, on transcripts it is often impossible to distinguish online from onsite credits from respected universities, especially when the same course is offered online and onsite (i.e., merely in different sections).

You might explain to your department chair that he's probably been accepting online transfer credits for some time. The University of North Texas and other major universities now offer online courses to full-time resident students who live on campus. Some students and instructors find this to be a better approach to learning.

And you might ask him why Bowling Green's assessment rigor is not widely known to be vastly superior to that of online courses from nearly all major universities that now offer distance education courses and even total degree programs, including schools like the Fuqua Graduate School at Duke, Stanford University (especially computer science and engineering online courses that bring in over $100 million per year), the University of Maryland, the University of Wisconsin, the University of Texas, Texas Tech, and even, gasp, The Ohio State University.

You might tell your department chair that by not offering some online alternatives, Bowling Green is not getting the most out of its students. The University of Illinois conducted a major study that found that students performed better in online versus onsite courses when matched pair sections took the same examinations.

And then you might top it off by asking your department chair how he justifies denying credit for Bowling Green's own distance education courses --- http://adultlearnerservices.bgsu.edu/index.php?x=opportunities 
The following is a quotation from the above Bowling Green site:

*****************************
The advancement of computer technology has provided a wealth of new opportunities for learning. Distance education is one example of technology’s ability to expand our horizons and gain from new experiences. BGSU offers many distance education courses and two baccalaureate degree completion programs online.

The Advanced Technological Education Degree Program is designed for individuals who have completed a two-year applied associate’s degree. The Bachelor of Liberal Studies Degree Program is ideal for students with previous college credit who would like flexibility in course selection while completing a liberal education program.

Distance Education Courses and Programs --- http://ideal.bgsu.edu/ONLINE/
***************************

Bob Jensen

March 16, 2006 reply from Amy Dunbar [Amy.Dunbar@BUSINESS.UCONN.EDU]

Count me in the camp that just isn't that concerned about online cheating. Perhaps that is because my students are graduate students and my online exams are open-book, timed exams, and a different version is presented to each student (much like a driver's license exam). In my end-of-semester survey, I ask whether students are concerned about cheating, and on occasion, I get one who is. But generally the response is no.

The UConn accounting department was just reviewed by the AACSB, and they were impressed by our MSA online program. They commented that they now believed that an online MSA program was possible. I am convinced that the people who are opposed to online education are unwilling to invest the time to see how online education is implemented. Sure there will be bad examples, but there are bad examples of face to face (FTF) teaching. How many profs do you know who simply read powerpoint slides to a sleeping class?! Last semester, I received the School of Business graduate teaching award even though I teach only online classes. I believe that the factor that really matters is that the students know you care about whether they are learning. A prof who cares interacts with students. You can do that online as well as FTF.

Do I miss FTF teaching -- you bet I do. But once I focused on what the student really needs to learn, I realized, much to my dismay, interacting FTF with Dunbar was not a necessary condition.

Amy Dunbar

March 16, 2006 message from Carol Flowers [cflowers@OCC.CCCD.EDU]

To resolve this issue and make me more comfortable with the grade a student earns, I have all my online exams proctored. I schedule weekends (placing them in the schedule of classes), and it is mandatory that they take the exams during this weekend period (Fri/Sat) at our computing center. It is my policy that if they can't take the paced exams during those periods, then the class is not one that they can participate in. This is no different from having different times that courses are offered. They have to make a choice in that situation, also, as to which time will best serve their needs.

March 16, 2006 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]

Our model is similar to Carol Flowers's. Our on-line MBA program requires an in-person meeting for four hours at the beginning of every semester, to let the students and professor get to know each other personally, followed by the distance-ed portion, concluding with another four-hour in-person session for the final examination or other assessment. The students all congregate at the Sheraton at Dulles airport, have dinner together Friday night, spend Saturday morning taking the final for their previous class, and spend Saturday afternoon being introduced to their next class. They do this between every semester. So far, the on-line group has outperformed (very slightly, and not statistically significantly due to small sample sizes) the face-to-face counterparts being used as our control groups. We believe the outperformance might reflect an inherent self-selection bias since the distance learners are usually professionals, whereas many of our face-to-face students are full-time students and generally a bit younger and more immature.

My personal on-line course consists of exactly the same readings as my F2F class, and exactly the same lectures (recorded using Tegrity) provided on CD and watched asynchronously, followed by on-line synchronous discussion sessions (2-3 hours per week) where I call on random students asking questions about the readings, lectures, etc., and engaging in lively discussion. I prepare some interesting cases and application dilemmas (mostly adapted from real world scenarios) and introduce dilemmas, gray areas, controversy (you expected maybe peace and quiet from David Fordham?!), and other thought-provoking issues for discussion. I have almost perfect attendance in the on-line synchronous sessions because the students really find the discussions engaging. Surprisingly, I have no problem with freeloaders who don't read or watch the recorded lectures. My major student assessment vehicle is an individual policy manual, supplemented by the in-person exam. Since each student's manual organization, layout, approach, and perspective is so very different from the others, cheating is almost out of the question. And the in-person exam is conducted almost like the CISP or old CPA exams... total quiet, no talking, no leaving the room, nothing but a pencil, etc.

And finally, no, you can't tell the difference on our student's transcript as to whether they took the on-line or in-person MBA. They look identical on the transcript.

We've not yet had any problem with anyone "rejecting" our credential that I'm aware of.

Regarding our own acceptance of transfer credit, we make the student provide evidence of the quality of each course (not the degree) before we exempt or accept credit. We do not distinguish between on-line or F2F -- nor do we automatically accept a course based on institution reputation. We have on many occasions rejected AACSB- accredited institution courses (on a course-by-course basis) because our investigation showed that the course coverage or rigor was not up to the standard we required. (The only "blanket" exception that we make is for certain familiar Virginia community college courses in the liberal studies where history has shown that the college and coursework reliably meets the standards -- every other course has to be accepted on a course-by-course basis.)

Just our $0.02 worth.

David Fordham
James Madison University


DOES DISTANCE LEARNING WORK?
A LARGE SAMPLE, CONTROL GROUP STUDY OF STUDENT SUCCESS IN DISTANCE LEARNING
by James Koch --- http://www.usq.edu.au/electpub/e-jist/docs/vol8_no1/fullpapers/distancelearning.htm

The relevant public policy question is this---Does distance learning "work" in the sense that students experience at least as much success when they utilize distance learning modes as compared to when they pursue conventional bricks and mortar education? The answer to this question is critical in determining whether burgeoning distance learning programs are cost-effective investments, either for students or for governments.

Of course, it is difficult to measure the "learning" in distance learning, not the least because distance learning courses now span nearly every academic discipline. Hence, most large sample evaluative studies utilize students’ grades as an imperfect proxy for learning. That approach is followed in the study reported here, as well.

A recent review of research in distance education reported that 1,419 articles and abstracts appeared in major distance education journals and as dissertations during the 1990-1999 period (Berge and Mrozowski, 2001). More than one hundred of these studies focused upon various measures of student success (such as grades, subsequent academic success, and persistence) in distance learning courses. Several asked the specific question addressed in this paper: Why do some students do better than others, at least as measured by the grade they receive in their distance learning course? A profusion of contradictory answers has emanated from these studies (Berge and Mrozowski, 2001; Machtmes and Asher, 2000). It is not yet clear how important to individual student success are factors such as the student’s characteristics (age, ethnic background, gender, academic background, etc.). However, other than knowing that experienced faculty are more effective than less experienced faculty (Machtmes and Asher, 2000), we know even less about how important the characteristics of distance learning faculty are to student success, particularly where televised, interactive distance learning is concerned.

Perhaps the only truly strong conclusion emerging from previous empirical studies of distance learning is the oft cited "no significant difference" finding (Saba, 2000). Indeed, an entire web site, http://teleeducation.nb.ca/nosignificantdifference, exists that reports 355 such "no significant difference" studies. Yet, without quarreling with such studies, they do not tell us why some students achieve better grades than others when they utilize distance learning.

Several studies have suggested that student learning styles and receptivity to distance learning influence student success (see Taplin and Jegede, 2001, for a short survey). Unfortunately, as Maushak et. al. (2001) point out, these intuitively sensible findings are not yet highly useful, because they are not based upon large sample, control group evidence that relates recognizable student learning styles to student performance. Studies that rely upon "conversation and discourse analysis" (Chen and Willits, 1999, provide a representative example) and interviews with students are helpful, yet are sufficiently anecdotal that they are unlikely to lead us to scientifically based conclusions about what works and what does not.

This paper moves us several steps forward in terms of our knowledge by means of a very large distance education sample (76,866 individual student observations) and an invaluable control group of students who took the identical course at the same time from the same instructor, but did so "in person" in a conventional "bricks and mortar" location. The results indicate that gender, age, ethnic background, distance learning experience, experience with the institution providing the instruction, and measures of academic aptitude and previous academic success are statistically significant determinants of student success. Similarly, faculty characteristics such as gender, age, ethnic background, and educational background are statistically significant predictors of student success, though not necessarily in the manner one might hypothesize.

Continued in this working paper
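
Studies of this kind typically regress the course grade on a delivery-mode indicator plus student and instructor characteristics, and read the delivery-mode coefficient as the distance-learning effect. The sketch below only illustrates that general setup with fabricated data and assumed variable names (using pandas and statsmodels); it is not Koch's specification, data, or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the actual study used 76,866 real student-course records.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "online": rng.integers(0, 2, n),            # 1 = distance section, 0 = bricks-and-mortar control
    "age": rng.normal(24, 5, n).round(),
    "female": rng.integers(0, 2, n),
    "prior_gpa": rng.uniform(2.0, 4.0, n).round(2),
})
# Fabricated outcome just so the example runs; the coefficients here mean nothing.
df["grade"] = (1.0 + 0.05 * df["online"] + 0.6 * df["prior_gpa"]
               + rng.normal(0, 0.4, n)).clip(0, 4)

model = smf.ols("grade ~ online + age + female + prior_gpa", data=df).fit()
print(model.summary())   # the coefficient on `online` is the estimated delivery-mode effect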


January 6, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

No Significant Difference Phenomenon website http://www.nosignificantdifference.org/ 

The website is a companion piece to Thomas L. Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of 355 research reports, summaries, and papers that document no significant differences in student outcomes between alternate modes of education delivery.


DISTANCE LEARNING AND FACULTY CONCERNS

Despite the growing number of distance learning programs, faculty are often reluctant to move their courses into the online medium. In "Addressing Faculty Concerns About Distance Learning" (ONLINE JOURNAL OF DISTANCE LEARNING ADMINISTRATION, vol. VIII, no. IV, Winter 2005) Jennifer McLean discusses several areas that influence faculty resistance, including: the perception that technical support and training is lacking, the fear of being replaced by technology, and the absence of a clearly-understood institutional vision for distance learning. The paper is available online at
http://www.westga.edu/%7Edistance/ojdla/winter84/mclean84.htm

The Online Journal of Distance Learning Administration is a free, peer-reviewed quarterly published by the Distance and Distributed Education Center, The State University of West Georgia, 1600 Maple Street, Carrollton, GA 30118 USA; Web: http://www.westga.edu/~distance/jmain11.html

Bob Jensen's threads on faculty concerns are at http://www.trinity.edu/rjensen/assess.htm

Also see Bob Jensen's threads on the dark side at http://www.trinity.edu/rjensen/000aaa/theworry.htm


QUESTIONING THE VALUE OF LEARNING TECHNOLOGY

"The notion that the future of education lies firmly in learning technology, seen as a tool of undoubted magnitude and a powerful remedy for many educational ills, has penetrated deeply into the psyche not only of those involved in delivery but also of observers, including those in power within national governments." In a paper published in 1992, Gabriel Jacobs expressed his belief that hyperlink technology would be a "teaching resource that would transform passive learners into active thinkers." In "Hypermedia and Discovery Based Learning: What Value?" (AUSTRALASIAN JOURNAL OF EDUCATIONAL TECHNOLOGY, vol. 21, no. 3, 2005, pp. 355-66), he reconsiders his opinions, "the result being that the guarded optimism of 1992 has turned to a deep pessimism." Jacob's paper is available online at http://www.ascilite.org.au/ajet/ajet21/jacobs.html .

The Australasian Journal of Educational Technology (AJET) [ISSN 1449-3098 (print), ISSN 1449-5554 (online)], published three times a year, is a refereed journal publishing research and review articles in educational technology, instructional design, educational applications of computer technologies, educational telecommunications, and related areas. Back issues are available on the Web at no cost. For more information and back issues go to http://www.ascilite.org.au/ajet/ajet.html .

See Bob Jensen's threads on the dark side at http://www.trinity.edu/rjensen/000aaa/theworry.htm


June 1, 2007 message from Carolyn Kotlas [kotlas@email.unc.edu]

TEACHING THE "NET GENERATION"

The April/May 2007 issue of INNOVATE explores and explains the learning styles and preferences of Net Generation learners. "Net Generation learners are information seekers, comfortable using technology to seek out information, frequently multitasking and using multiple forms of media simultaneously. As a result, they desire independence and autonomy in their learning processes."

Articles include:

"Identifying the Generation Gap in Higher Education: Where Do theDifferences Really Lie?"
by Paula Garcia and Jingjing Qin, Northern Arizona University

"MyLiteracies: Understanding the Net Generation through LiveJournals and Literacy Practices"
by Dana J. Wilber, Montclair State University

"Is Education 1.0 Ready for Web 2.0 Students?"
by John Thompson, Buffalo State College

The issue is available at http://innovateonline.info/index.php.

Registration is required to access articles; registration is free.

Innovate: Journal of Online Education [ISSN 1552-3233], an open-access, peer-reviewed online journal, is published bimonthly by the Fischler School of Education and Human Services at Nova Southeastern University.

The journal focuses on the creative use of information technology (IT) to enhance educational processes in academic, commercial, and governmental settings. For more information, contact James L. Morrison, Editor-in-Chief; email: innovate@nova.edu ;
Web:  http://innovateonline.info/.

The journal also sponsors Innovate-Live webcasts and discussion forums that add an interactive component to the journal articles. To register for these free events, go to http://www.uliveandlearn.com/PortalInnovate/.

See also:

"Motivating Today's College Students"
By Ian Crone
PEER REVIEW, vol. 9, no. 1, Winter 2007

http://www.aacu.org/peerreview/pr-wi07/pr-wi07_practice.cfm

Peer Review, published quarterly by the Association of American Colleges and Universities (AACU), provides briefings on "emerging trends and key debates in undergraduate liberal education. Each issue is focused on a specific topic, provides comprehensive analysis, and highlights changing practice on diverse campuses." For more information, contact: AACU, 1818 R Street NW, Washington, DC 20009 USA;

tel: 202-387-3760; fax: 202-265-9532;
Web: 
http://www.aacu.org/peerreview/.

For a perspective on educating learners on the other end of the generational continuum see:

"Boomer Reality"
By Holly Dolezalek
TRAINING, vol. 44, no. 5, May 2007

http://www.trainingmag.com/msg/content_display/publications/e3if330208bec8f4014fac339db9fd0678e

Training [ISSN 0095-5892] is published monthly by Nielsen Business Media, Inc., 770 Broadway, New York, NY 10003-9595 USA;
tel: 646-654-4500; email:
bmcomm@nielsen.com ;
Web:  http://www.trainingmag.com.

Bob Jensen's threads on learning can be found at the following Web sites:

http://www.trinity.edu/rjensen/assess.htm

http://www.trinity.edu/rjensen/255wp.htm

http://www.trinity.edu/rjensen/265wp.htm

http://www.trinity.edu/rjensen/000aaa/0000start.htm

 


June 1, 2007 message from Carolyn Kotlas [kotlas@email.unc.edu]

TECHNOLOGY AND CHANGE IN EDUCATIONAL PRACTICE

"Even if research shows that a particular technology supports a certain kind of learning, this research may not reveal the implications of implementing it. Without appropriate infrastructure or adequate provisions of services (policy); without the facility or ability of teachers to integrate it into their teaching practice (academics); without sufficient support from technologists and/or educational technologists (support staff), the likelihood of the particular technology or software being educationally effective is questionable."

The current issue (vol. 19, no. 1, 2007) of the JOURNAL OF EDUCATIONAL TECHNOLOGY & SOCIETY presents a selection of papers from the Conference Technology and Change in Educational Practice which was held at the London Knowledge Lab, Institute of Education, London in October 2005.

The papers cover three areas: "methodological frameworks, proposing new ways of structuring effective research; empirical studies, illustrating the ways in which technology impacts the working roles and practices in Higher Education; and new ways of conceptualising technologies for education."

Papers include:

"A Framework for Conceptualising the Impact of Technology on Teaching and Learning"
by Sara Price and Martin Oliver, London Knowledge Lab, Institute of Education

"New and Changing Teacher Roles in Higher Education in a Digital Age"
by Jo Dugstad Wake, Olga Dysthe, and Stig Mjelstad, University of Bergen

"Academic Use of Digital Resources: Disciplinary Differences and the Issue of Progression Revisited"
by Bob Kemp, Lancaster University, and Chris Jones, Open University

"The Role of Blogs In Studying the Discourse and Social Practices of Mathematics Teachers"
by Katerina Makri and Chronis Kynigos, University of Athens

The issue is available at http://www.ifets.info/issues.php?show=current.

The Journal of Educational Technology and Society [ISSN 1436-4522] is a peer-reviewed, quarterly publication that "seeks academic articles on the issues affecting the developers of educational systems and educators who implement and manage such systems." Current and back issues are available at http://www.ifets.info/. The journal is published by the International Forum of Educational Technology & Society. For more information, see http://ifets.ieee.org/.

Bob Jensen's threads on blogs and listservs are at http://www.trinity.edu/rjensen/ListservRoles.htm

Bob Jensen's threads on education technologies are at http://www.trinity.edu/rjensen/000aaa/0000start.htm

Bob Jensen's threads on distance education and training alternatives are at http://www.trinity.edu/rjensen/Crossborder.htm


Civil Rights Groups That Favor Standardized Testing

"Teachers and Rights Groups Oppose Education Measure ," by Diana Jean Schemo, The New York Times, September 11, 2007 --- http://www.nytimes.com/2007/09/11/education/11child.html?_r=1&oref=slogin

The draft House bill to renew the federal No Child Left Behind law came under sharp attack on Monday from civil rights groups and the nation’s largest teachers unions, the latest sign of how difficult it may be for Congress to pass the law this fall.

At a marathon hearing of the House Education Committee, legislators heard from an array of civil rights groups, including the Citizens’ Commission on Civil Rights, the National Urban League, the Center for American Progress and Achieve Inc., a group that works with states to raise academic standards.

All protested that a proposal in the bill for a pilot program that would allow districts to devise their own measures of student progress, rather than using statewide tests, would gut the law’s intent of demanding that schools teach all children, regardless of poverty, race or other factors, to the same standard.

Dianne M. Piché, executive director of the Citizens’ Commission on Civil Rights, said the bill had “the potential to set back accountability by years, if not decades,” and would lead to lower standards for children in urban and high poverty schools.

“It strikes me as not unlike allowing my teenage son and his friends to score their own driver’s license tests,” Ms. Piché said, adding, “We’ll have one set of standards for the Bronx and one for Westchester County, one for Baltimore and one for Bethesda.”

Continued in article

 


What works in education?

As I said previously, great teachers come in about as many varieties as flowers.  Click on the link below to read about some of the varieties recalled by students from their high school days.  It should be noted that "favorite teacher" is not synonymous with "learned the most."  Favorite teachers are often great at entertaining and/or motivating.  Favorite teachers often make learning fun in a variety of ways.

However, students may actually learn the most from rather dull teachers with high standards and demanding assignments and exams. Dull teachers may also be the dedicated souls who are willing to spend extra time in one-on-one sessions or extra-hour tutorials that ultimately have an enormous impact on mastery of the course. And then there are teachers who are neither entertaining nor generous with face-to-face time, yet are winners because the learning materials they have developed far exceed those of other teachers in terms of what students take away.

The recollections below tend to lean toward entertainment and "fun" teachers, but you must keep in mind that these were written after the fact by former high school students. In high school, dull teachers tend not to be popular either before or after the fact. This is not always the case when former students recall their college professors.


"'A dozen roses to my favorite teacher," The Philadelphia Enquirer, November 30, 2004 --- http://www.philly.com/mld/inquirer/news/special_packages/phillycom_teases/10304831.htm?1c 




 

"The Great Debate: Effectiveness of Technology in Education," by Patricia Deubel, T.H.E. Journal, November 2007 ---
http://www.thejournal.com/articles/21544

According to Robert Kuhn (2000), an expert in brain research, few people understand the complexity of that change. Technology is creating new thinking that is "at once creative and innovative, volatile and turbulent" and "nothing less than a shift in worldview." The change in mental process has been brought about because "(1) information is freely available, and therefore interdisciplinary ideas and cross-cultural communication are widely accessible; (2) time is compressed, and therefore reflection is condensed and decision-making is compacted; (3) individuals are empowered, and therefore private choice and reach are strengthened and one person can have the presence of an institution" (sec: Concluding Remarks).

If we consider thinking as both individual (internal) and social (external), as Rupert Wegerif (2000) suggests, then "[t]echnology, in various forms from language to the internet, carries the external form of thinking. Technology therefore has a role to play through supporting improved social thinking (e.g. providing systems to mediate decision making and collective reasoning) and also through providing tools to help individuals externalize their thinking and so to shape their own social worlds" (p. 15).

The new tools for communication that have become part of the 21st century no doubt contribute to thinking. Thus, in a debate on effectiveness or on implementation of a particular tool, we must also consider the potential for creativity, innovation, volatility, and turbulence that Kuhn (2000) indicates.

Continued in article

Bob Jensen's threads on education technology are at http://www.trinity.edu/rjensen/000aaa/0000start.htm


Questioning the Admissions Assumptions

And further, the study finds that all of the information admissions officers currently have (high school grades, SAT/ACT scores, essays, everything)  is of limited value, and accounts for only 30 percent of the grade variance in colleges — leaving 70 percent of the variance unexplained.
Scott Jaschik, "Questioning the Admissions Assumptions," Inside Higher Ed, June 19, 2007 --- http://www.insidehighered.com/news/2007/06/19/admit

The report is available at http://cshe.berkeley.edu/publications/docs/ROPS.GEISER._SAT_6.12.07.pdf
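
The "30 percent of the grade variance" claim is a statement about R-squared, the share of variation in college grades that the admissions predictors account for. A minimal worked example of that arithmetic, using made-up numbers rather than the Geiser data:

import numpy as np

# Made-up college GPAs and the GPAs predicted from high school grades and test scores.
actual    = np.array([3.1, 2.4, 3.8, 2.9, 3.5, 2.2])
predicted = np.array([3.0, 2.9, 3.1, 2.8, 3.1, 2.9])

ss_res = np.sum((actual - predicted) ** 2)        # variation the prediction misses
ss_tot = np.sum((actual - actual.mean()) ** 2)    # total variation in the actual grades
r_squared = 1 - ss_res / ss_tot

print(f"Share of grade variance explained: {r_squared:.0%}")  # about 26% with these made-up numbers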

Roland G. Fryer, who was hired by Schools Chancellor Joel I. Klein to advise him on how to narrow the racial gap in achievement in the city’s schools, made his professional name in economics by applying complex algorithms to document how black students fall behind their white peers. But his life story challenges his own calculations. . . . His first job, though, he said, will be to mine data — from graduation rates to test scores to demographic information — to find out why there are wide gulfs between schools. Why, for example, does one school in Bedford-Stuyvesant do so much better than a school just down the block? And he will monitor the pilot program to pay fourth- and seventh-grade students as much as $500 for doing well on a series of standardized tests. That program will begin in 40 schools this fall. He hopes to find other ways to motivate students.
Jennifer Medina, "His Charge: Find a Key to Students’ Success," The New York Times, June 21, 2007 --- http://www.nytimes.com/2007/06/21/nyregion/21fryer.html?_r=1&oref=slogin

Jensen Comment
I suspect that SAT scores are more predictive for some students than for others. For example, SAT math performance may be a better predictor of grades in mathematics and science courses than SAT verbal performance is a predictor of grades in literature and language courses. The study mentioned above does not delve into this level of detail. Top universities that have dropped SAT requirements (e.g., under the Texas Top Ten Percent Law) are not especially happy about losing so many top SAT performers --- http://www.trinity.edu/rjensen/HigherEdControversies.htm#10PercentLaw

SAT/ACT testing falls down because it does not examine motivation very well. High school grades fail because of rampant grade inflation and lowered academic standards in high schools. College grades are not a good criterion because of grade inflation in colleges --- http://www.trinity.edu/rjensen/assess.htm#GradeInflation


Question
What factors most heavily influence student performance and desire to take more courses in a given discipline?

Answer
These outcomes are too complex to be predicted very well. Sex and age of instructors have almost no impact. Teaching evaluations have a very slight impact, but there are just too many complexities to find dominant factors cutting across a majority of students.

Oreopoulos said the findings bolster a conclusion he came to in a previous academic paper that subjective qualities, such as how a professor fares on student evaluations, tell you more about how well students will perform and how likely they are to stay in a given course than do observable traits such as age or gender. (He points out, though, that even the subjective qualities aren’t strong indicators of student success.) “If I were concerned about improving teaching, I would focus on hiring teachers who perform well on evaluations rather than focus on age or gender,” he said.
Elia Powers, "Faculty Gender and Student Performance," Inside Higher Ed, June 21, 2007 --- http://www.insidehighered.com/news/2007/06/21/gender

Jensen Comment
A problem with increased reliance on teaching evaluations to measure the performance of instructors is that this, in turn, tends to encourage grade inflation --- http://www.trinity.edu/rjensen/assess.htm#GradeInflation

 

 


 

Question
What parts of a high school curriculum are the best predictors of success as a science major in college?

New research by professors at Harvard University and the University of Virginia has found that no single high school science course has an impact beyond that type of science, when it comes to predicting success in college science. However, the researchers found that a rigorous mathematics curriculum in high school has a significant impact on performance in college science courses. The research, which will be published in Science, runs counter to the “physics first” movement in which some educators have been advocating that physics come before biology and chemistry in the high school curriculum. The study was based on analysis of a broad pool of college students, their high school course patterns, and their performance in college science.
Inside Higher Ed, July 27, 2007 --- http://www.insidehighered.com/news/2007/07/27/qt

Jensen Comment
This finding arrives just as some colleges are trying to boost applications and admissions by dropping SAT testing requirements. In Texas, students in the top 10% of any state high school class do not even have to take the SAT for admission to any state university in Texas. Of course high schools may still offer a rigorous mathematics curriculum, but what high school student aiming for the 10% rule is going to take a rigorous course that is not required for high school graduation? The problem is that rigorous elective courses carry a higher risk of lowering the all-important grade point average.

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm


 

Grades are even worse than tests as predictors of success

"The Wrong Traditions in Admissions," by William E. Sedlacek, Inside Higher Ed, July 27, 2007 --- http://www.insidehighered.com/views/2007/07/27/sedlacek

Grades and test scores have worked well as the prime criteria to evaluate applicants for admission, haven’t they? No! You’ve probably heard people say that over and over again, and figured that if the admissions experts believe it, you shouldn’t question them. But that long held conventional wisdom just isn’t true. Whatever value tests and grades have had in the past has been severely diminished. There are many reasons for this conclusion, including greater diversity among applicants by race, gender, sexual orientation and other dimensions that interact with career interests. Predicting success with so much variety among applicants with grades and test scores asks too much of those previous stalwarts of selection. They were never intended to carry such a heavy expectation and they just can’t do the job anymore, even if they once did. Another reason is purely statistical. We have had about 100 years to figure out how to measure verbal and quantitative skills better but we just can’t do it.

Grades are even worse than tests as predictors of success. The major reason is grade inflation. Everyone is getting higher grades these days, including those in high school, college, graduate, and professional school. Students are bunching up at the top of the grade distribution and we can’t distinguish among them in selecting who would make the best student at the next level.

We need a fresh approach. It is not good enough to feel constrained by the limitations of our current ways of conceiving of tests and grades. Instead of asking; “How can we make the SAT and other such tests better?” or “How can we adjust grades to make them better predictors of success?” we need to ask; “What kinds of measures will meet our needs now and in the future?” We do not need to ignore our current tests and grades, we need to add some new measures that expand the potential we can derive from assessment.

We appear to have forgotten why tests were created in the first place. While they were always considered to be useful in evaluating candidates, they were also considered to be more equitable than using prior grades because of the variation in quality among high schools.

Test results should be useful to educators — whether involved in academics or student services — by providing the basis to help students learn better and to analyze their needs. As currently designed, tests do not accomplish these objectives. How many of you have ever heard a colleague say “I can better educate my students because I know their SAT scores”? We need some things from our tests that currently we are not getting. We need tests that are fair to all and provide a good assessment of the developmental and learning needs of students, while being useful in selecting outstanding applicants. Our current tests don’t do that.

The rallying cry of “all for one and one for all” is one that is used often in developing what are thought of as fair and equitable measures. Commonly, the interpretation of how to handle diversity is to hone and fine-tune tests so they work equally well for everyone (or at least to try to do that). However, if different groups have different experiences and varied ways of presenting their attributes and abilities, it is unlikely that one could develop a single measure, scale, test item etc. that could yield equally valid scores for all. If we concentrate on results rather than intentions, we could conclude that it is important to do an equally good job of selection for each group, not that we need to use the same measures for all to accomplish that goal. Equality of results, not process, is most important.

Therefore, we should seek to retain the variance due to culture, race, gender, and other aspects of non-traditionality that may exist across diverse groups in our measures, rather than attempt to eliminate it. I define non-traditional persons as those with cultural experiences different from those of white middle-class males of European descent; those with less power to control their lives; and those who experience discrimination in the United States.

While the term “noncognitive” appears to be precise and “scientific” sounding, it has been used to describe a wide variety of attributes. Mostly it has been defined as something other than grades and test scores, including activities, school honors, personal statements, student involvement etc. In many cases those espousing noncognitive variables have confused a method (e.g. letters of recommendation) with what variable is being measured. One can look for many different things in a letter. Robert Sternberg’s system of viewing intelligence provides a model, but it is important to know what sorts of abilities are being assessed and that those attributes are not just proxies for verbal and quantitative test scores. Noncognitive variables appear to be in Sternberg’s experiential and contextual domains, while standardized tests tend to reflect the componential domain. Noncognitive variables are useful for all students, but they are particularly critical for non-traditional students, since standardized tests and prior grades may provide only a limited view of their potential.

I and my colleagues and students have developed a system of noncognitive variables that has worked well in many situations. The eight variables in the system are self-concept, realistic self-appraisal, handling the system (racism), long range goals, strong support person, community, leadership, and nontraditional knowledge. Measures of these dimensions are available at no cost in a variety of articles and in a book, Beyond the Big Test.

This Web site has previously featured how Oregon State University has used a version of this system very successfully in increasing their diversity and student success. Aside from increased retention of students, better referrals for student services have been experienced at Oregon State. The system has also been employed in selecting Gates Millennium Scholars. This program, funded by the Bill & Melinda Gates Foundation, provides full scholarships to undergraduate and graduate students of color from low-income families. The SAT scores of those not selected for scholarships were somewhat higher than those selected. To date this program has provided scholarships to more than 10,000 students attending more than 1,300 different colleges and universities. Their college GPAs are about 3.25, with five year retention rates of 87.5 percent and five year graduation rates of 77.5 percent, while attending some of the most selective colleges in the country. About two thirds are majoring in science and engineering.

The Washington State Achievers program has also employed the noncognitive variable system discussed above in identifying students from certain high schools that have received assistance from an intensive school reform program also funded by the Bill & Melinda Gates Foundation. More than 40 percent of the students in this program are white, and overall the students in the program are enrolling in colleges and universities in the state and are doing well. The program provides high school and college mentors for students. The College Success Foundation is introducing a similar program in Washington, D.C., using the noncognitive variables my colleagues and I have developed.

Recent articles in this publication have discussed programs at the Educational Testing Service for graduate students and Tufts University for undergraduates that have incorporated noncognitive variables. While I applaud the efforts for reasons I have discussed here, there are questions I would ask of each program. What variables are you assessing in the program? Do the variables reflect diversity conceptually? What evidence do you have that the variables assessed correlate with student success? Are the evaluators of the applications trained to understand how individuals from varied backgrounds may present their attributes differently? Have the programs used the research available on noncognitive variables in developing their systems? How well are the individuals selected doing in school compared to those rejected or those selected using another system? What are the costs to the applicants? If there are increased costs to applicants, why are they not covered by ETS or Tufts?

Until these and related questions are answered these two programs seem like interesting ideas worth watching. In the meantime we can learn from the programs described above that have been successful in employing noncognitive variables. It is important for educators to resist half measures and to confront fully the many flaws of the traditional ways higher education has evaluated applicants.

William E. Sedlacek is professor emeritus at the University of Maryland at College Park. His latest book is Beyond the Big Test: Noncognitive Assessment in Higher Education

 


A different way to think about assessment

 

January 26, 2007 message from Carnegie President [carnegiepresident@carnegiefoundation.org]

A different way to think about ... assessment
In the most recent issue of Change magazine, I join several other authors to examine higher education's ongoing responsibility to tell the story of student learning with care and precision. Fulfilling this responsibility at the institutional level requires ongoing deliberations among colleagues and stakeholders about the specific learning goals we seek and the broad educational purposes we espouse. What will motivate such discussions?

In this month's Carnegie Perspectives, Lloyd Bond makes a strong case for the use of common examinations as a powerful form of assessment as well as a fruitful context for faculty deliberations about their goals for students. Using an institutional example from the Carnegie/Hewlett project on strengthening teaching and learning at community colleges, Lloyd describes a particular example of this principle and how it supports faculty communication and student learning.

Carnegie has created a forum—Carnegie Conversations—where you can engage publicly with Lloyd and read and respond to what others have to say about this article at http://www.carnegiefoundation.org/perspectives/january2007

Or you may respond to the author privately through CarnegiePresident@carnegiefoundation.org

We look forward to hearing from you.

Sincerely,

Lee S. Shulman
President The Carnegie Foundation for the Advancement of Teaching


International Journal for the Scholarship of Teaching and Learning --- http://www.georgiasouthern.edu/ijsotl/


Just-In-Time Teaching --- http://134.68.135.1/jitt/

What is Just-in-Time Teaching?

G. Novak, gnovak@iupui.edu
Just-in-Time Teaching (JiTT for short) is a teaching and learning strategy based on the interaction between web-based study assignments and an active learner classroom. Students respond electronically to carefully constructed web-based assignments which are due shortly before class, and the instructor reads the student submissions "just-in-time" to adjust the classroom lesson to suit the students' needs. Thus, the heart of JiTT is the "feedback loop" formed by the students' outside-of-class preparation that fundamentally affects what happens during the subsequent in-class time together.
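
The "just-in-time" part is simply a matter of aggregating the pre-class submissions fast enough to reshape the lesson. The following is a minimal sketch of that step, assuming the WarmUp responses have already been exported to a CSV file; the file name and column names are assumptions for illustration, not part of the JiTT toolset.

import csv
from collections import Counter

def summarize_warmups(path="warmups.csv"):
    """Tally which WarmUp questions were most often missed in the pre-class submissions."""
    misses = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):               # expected columns: student, question, correct
            if row["correct"].strip().upper() != "Y":
                misses[row["question"]] += 1
    for question, count in misses.most_common(3):   # spend class time on the top trouble spots
        print(f"{count} students struggled with: {question}")

if __name__ == "__main__":
    summarize_warmups()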

What is Just-in-Time Teaching designed to accomplish?

JiTT is aimed at many of the challenges facing students and instructors in today's classrooms. Student populations are diversifying. In addition to the traditional nineteen-year-old recent high school graduates, we now have a kaleidoscope of "non-traditional" students: older students, working part time students, commuting students, and, at the service academies, military cadets. They come to our courses with a broad spectrum of educational backgrounds, interests, perspectives, and capabilities that compel individualized, tailored instruction. They need motivation and encouragement to persevere. Consistent, friendly support can make the difference between a successful experience and a fruitless effort. It can even mean the difference between graduating and dropping out. Education research has made us more aware of learning style differences and of the importance of passing some control of the learning process over to the students. Active learner environments yield better results but they are harder to manage than lecture oriented approaches. Three of the "Seven Principles for Good Practice in Undergraduate Education" encourage student-faculty contact, increased time for student study, and cooperative learning between students.
To confront these challenges, the Just-in-Time Teaching strategy pursues three major goals:

What JiTT is Not

Although Just-in-Time Teaching makes heavy use of the web, it is not to be confused with either distance learning (DL) or with computer-aided instruction (CAI). Virtually all JiTT instruction occurs in a classroom with human instructors. The web materials, added as a pedagogical resource, act primarily as a communication tool and secondarily as content provider and organizer. JiTT is also not an attempt to 'process' large numbers of students by employing computers to do massive grading jobs.

The JiTT Feedback Loop

The Web Component

JiTT web pages fall into three major categories:

The Active Learner Classroom

The JiTT classroom session is intimately linked to the electronic preparatory assignments the students complete outside of class. Exactly how the classroom time is spent depends on a variety of issues such as class size, classroom facilities, and student and instructor personalities. Mini-lectures (10 min max) are often interspersed with demos, classroom discussion, worksheet exercises, and even hands-on mini-labs. Regardless, the common key is that the classroom component, whether interactive lecture or student activities, is informed by an analysis of various student responses.
In a JiTT classroom students construct the same content as in a passive lecture with two important added benefits. First, having completed the web assignment very recently, they enter the classroom ready to actively engage in the activities. Secondly, they have a feeling of ownership since the interactive lesson is based on their own wording and understanding of the relevant issues.
The give and take in the classroom suggests future WarmUp questions that will reflect the mood and the level of expertise in the class at hand. In this way the feedback loop is closed with the students having played a major part in the endeavor.
From the instructor's point of view, the lesson content remains pretty much the same from semester to semester with only minor shifts in emphasis. From the students' perspective, however, the lessons are always fresh and interesting, with a lot of input from the class.
We designed JiTT to improve student learning in our own classrooms and have been encouraged by the results, both attitudinal and cognitive. We attribute this success to three factors that enhance student learning, identified by Alexander Astin* in his thirty-year study of college student success:
  By fostering these, JiTT promotes student learning and satisfaction.

*Astin, Alexander: What matters in college? Four critical years revisited (San Francisco, CA: Jossey-Bass Publishers, 1993).

 Bob Jensen's threads on tools and tricks of the trade are at http://www.trinity.edu/rjensen/000aaa/thetools.htm


What works in education?

Perhaps Colleges Should Think About This

"School Ups Grade by Going Online," by Cyrus Farivar, Wired News, October 12, 2004 --- http://www.wired.com/news/culture/0,1284,65266,00.html?tw=newsletter_topstories_html 

Until last year, Walt Whitman Middle School 246 in Brooklyn was considered a failing school by the state of New York.

But with the help of a program called HIPSchools that uses rapid communication between parents and teachers through e-mail and voice mail, M.S. 246 has had a dramatic turnaround. The premise behind "HIP" comes from Keys Technology Group's mission of "helping involve parents."

The school has seen distinct improvement in the performance of its 1300 students, as well as regular attendance, which has risen to 98 percent (an increase of over 10 percent) in the last two years according to Georgine Brown-Thompson, academic intervention services coordinator at M.S. 246.

Continued in the article


September 2, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu

"CONSUMER REPORTS" FOR RESEARCH IN EDUCATION

The What Works Clearinghouse was established in 2002 by the U.S. Department of Education's Institute of Education Sciences with $18.5 million in funding to "provide educators, policymakers, researchers, and the public with a central and trusted source of scientific evidence of what works in education." The Clearinghouse reviews, according to relevance and validity, the "effectiveness of replicable educational interventions (programs, products, practices, and policies) that intend to improve student outcomes." This summer, the Clearinghouse released two of its planned reports: peer-assisted learning interventions and middle school math curricula. For more information about the What Works Clearinghouse and descriptions of all topics to be evaluated, go to http://www.w-w-c.org/ 

See also:

"'What Works' Research Site Unveiled" by Debra Viadero EDUCATION WEEK, vol. 23, no. 42, pp. 1, 33, July 14, 2004 http://www.edweek.org/ew/ew_printstory.cfm?slug=42Whatworks.h23 

"'What Works' Site Opens Dialogue on Research" Letter to Editor from Talbot Bielefeldt, Center for Applied Research in Educational Technology, International Society for Technology in Education EDUCATION WEEK, vol. 23, no. 44, p. 44, August 11, 2004 http://www.edweek.org/ew/ew_printstory.cfm?slug=44Letter.h23 

April 1, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu]

NEW EDUCAUSE E-BOOK ON THE NET GENERATION

EDUCATING THE NET GENERATION, a new EDUCAUSE e-book of essays edited by Diana G. Oblinger and James L. Oblinger, "explores the Net Gen and the implications for institutions in areas such as teaching, service, learning space design, faculty development, and curriculum." Essays include: "Technology and Learning Expectations of the Net Generation;" "Using Technology as a Learning Tool, Not Just the Cool New Thing;" "Curricula Designed to Meet 21st-Century Expectations;" "Faculty Development for the Net Generation;" and "Net Generation Students and Libraries." The entire book is available online at no cost at http://www.educause.edu/educatingthenetgen/ .

EDUCAUSE is a nonprofit association whose mission is to advance higher education by promoting the intelligent use of information technology. For more information, contact: Educause, 4772 Walnut Street, Suite 206, Boulder, CO 80301-2538 USA; tel: 303-449-4430; fax: 303-440-0461; email: info@educause.edu;  Web: http://www.educause.edu/

See also:

GROWING UP DIGITAL: THE RISE OF THE NET GENERATION by Don Tapscott McGraw-Hill, 1999; ISBN: 0-07-063361-4 http://www.growingupdigital.com/


EFFECTIVE E-LEARNING DESIGN

"The unpredictability of the student context and the mediated relationship with the student require careful attention by the educational designer to details which might otherwise be managed by the teacher at the time of instruction." In "Elements of Effective e-Learning Design" (INTERNATIONAL REVIEW OF RESEARCH IN OPEN AND DISTANCE LEARNING, March 2005) Andrew R. Brown and Bradley D. Voltz cover six elements of effective design that can help create effective e-learning delivery. Drawing upon examples from The Le@rning Federation, an initiative of state and federal governments of Australia and New Zealand, they discuss lesson planning, instructional design, creative writing, and software specification. The paper is available online at http://www.irrodl.org/content/v6.1/brown_voltz.html 

International Review of Research in Open and Distance Learning (IRRODL) [ISSN 1492-3831] is a free, refereed ejournal published by Athabasca University - Canada's Open University. For more information, contact Paula Smith, IRRODL Managing Editor; tel: 780-675-6810; fax: 780-675-672; email: irrodl@athabascau.ca ; Web: http://www.irrodl.org/

The Le@rning Federation (TLF) is an "initiative designed to create online curriculum materials and the necessary infrastructure to ensure that teachers and students in Australia and New Zealand can use these materials to widen and enhance their learning experiences in the classroom." For more information, see http://www.thelearningfederation.edu.au/


COMPUTERS IN THE CLASSROOM AND OPEN BOOK EXAMS

In "PCs in the Classroom & Open Book Exams" (UBIQUITY, vol. 6, issue 9, March 15-22, 2005), Evan Golub asks and supplies some answers to questions regarding open-book/open-note exams. When classroom computer use is allowed and encouraged, how can instructors secure the open-book exam environment? How can cheating be minimized when students are allowed Internet access during open-book exams? Golub's suggested solutions are available online at
http://www.acm.org/ubiquity/views/v6i9_golub.html


May 5, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu]

TEACHING, TEACHING TECHNOLOGIES, AND VIEWS OF KNOWLEDGE

In "Teaching as Performance in the Electronic Classroom" (FIRST MONDAY, vol. 10, no. 4, April 2005), Doug Brent, professor in the Faculty of Communication and Culture at the University of Calgary, presents two views of teaching: teaching as a "performance" and teaching as a transfer of knowledge through text, a "thing." He discusses the social groups that have stakes in each view and how teaching will be affected by the view and group that gains primacy. "If the group that values teaching as performance has the most influence, we will put more energy into developing flexible courseware that promotes social engagement and interaction. . . . If the group that sees teaching as textual [i.e., a thing] has the most influence, we will develop more elaborate technologies for delivering courses as online texts, emphasising the role of the student as audience rather than as participant." Brent's paper is available online at http://firstmonday.org/issues/issue10_4/brent/index.html .

First Monday [ISSN 1396-0466] is an online, peer-reviewed journal whose aim is to publish original articles about the Internet and the global information infrastructure. It is published in cooperation with the University Library, University of Illinois at Chicago. For more information, contact: First Monday, c/o Edward Valauskas, Chief Editor, PO Box 87636, Chicago IL 60680-0636 USA; email: ejv@uic.edu; Web: http://firstmonday.dk/.

......................................................................

LAPTOPS IN THE CLASSROOM

The theme for the latest issue of NEW DIRECTIONS FOR TEACHING AND LEARNING (vol. 2005, issue 101, Spring 2005) is "Enhancing Learning with Laptops in the Classroom." Centered on the faculty development program at Clemson University, the issue's purpose is "to show that university instructors can and do make pedagogically productive and novel use of laptops in the classroom" and "to advise institutional leaders on how to make a laptop mandate successful at their university." The publication is available online at http://www3.interscience.wiley.com/cgi-bin/jhome/86011233 .

New Directions for Teaching and Learning [ISSN: 0271-0633], a quarterly journal published by Wiley InterScience, offers a "comprehensive range of ideas and techniques for improving college teaching based on the experience of seasoned instructors and on the latest findings of educational and psychological researchers." The journal is available both in print and online formats.

......................................................................

NEW E-JOURNAL ON LEARNING AND EVALUATION

STUDIES IN LEARNING, EVALUATION, INNOVATION AND DEVELOPMENT is a new peer-reviewed electronic journal that "supports emerging scholars and the development of evidence-based practice and that publishes research and scholarship about teaching and learning in formal, semi-formal and informal educational settings and sites." Papers in the current issue include:

"Can Students Improve Performance by Clicking More? Engaging Students Through Online Delivery" by Jenny Kofoed

"Managing Learner Interactivity: A Precursor to Knowledge Exchange" by Ken Purnell, Jim Callan, Greg Whymark and Anna Gralton

"Online Learning Predicates Teamwork: Collaboration Underscores Student Engagement" by Greg Whymark, Jim Callan and Ken Purnell

Studies in Learning, Evaluation, Innovation and Development [ISSN 1832-2050] will be published at least once a year by the LEID (Learning, Evaluation, Innovation and Development) Centre, Division of Teaching and Learning Services, Central Queensland University, Rockhampton, Queensland 4702 Australia. For more information contact: Patrick Danaher, tel: +61-7-49306417; email: p.danaher@cqu.edu.au. Current and back issues are available at http://www.sleid.cqu.edu.au/index.php .


Bob Jensen's threads on education resources are at http://www.trinity.edu/rjensen/000aaa/newfaculty.htm#Resources 

Bob Jensen's threads on assessment are at http://www.trinity.edu/rjensen/assess.htm 

 


September 2, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu

SURVEY ON QUALITY AND EXTENT OF ONLINE EDUCATION

The Sloan Consortium's 2003 Survey of Online Learning wanted to know whether students, faculty, and institutions would embrace online education as a delivery method and whether the quality of online education would match that of face-to-face instruction. The survey found strong evidence that students are willing to sign up for online courses and that institutions consider online courses part of a "critical long-term strategy for their institution." It is less clear that faculty have embraced online teaching with the same degree of enthusiasm. The survey's findings are available in "Sizing the Opportunity: The Quality & Extent of Online Education in the U.S., 2002 and 2003" by I. Elaine Allen and Jeff Seaman, Sloan Center for Online Education at Olin and Babson Colleges. The complete report is online at http://www.sloan-c.org/resources/sizing_opportunity.pdf 

The Sloan Consortium (Sloan-C) is a consortium of institutions and organizations committed "to help learning organizations continually improve quality, scale, and breadth of their online programs according to their own distinctive missions, so that education will become a part of everyday life, accessible and affordable for anyone, anywhere, at any time, in a wide variety of disciplines." Sloan-C is funded by the Alfred P. Sloan Foundation. For more information, see http://www.sloan-c.org/ 

Bob Jensen's threads on the dark side of distance education are at http://www.trinity.edu/rjensen/000aaa/theworry.htm 

 


Computer Grading of Essays

Sociology professor designs SAGrader software for grading student essays
Student essays always seem to be riddled with the same sorts of flaws. So sociology professor Ed Brent decided to hand the work off to a computer. Students in Brent's Introduction to Sociology course at the University of Missouri-Columbia now submit drafts through the SAGrader software he designed. It counts the number of points he wanted his students to include and analyzes how well concepts are explained. And within seconds, students have a score. It used to be the students who looked for shortcuts, shopping for papers online or pilfering parts of an assignment with a simple Google search. Now, teachers and professors are realizing that they, too, can tap technology for a facet of academia long reserved for a teacher alone with a red pen. Software now scores everything from routine assignments in high school English classes to an essay on the GMAT, the standardized test for business school admission. (The essay section just added to the Scholastic Aptitude Test for the college-bound is graded by humans). Though Brent and his two teaching assistants still handle final papers and grades, students are encouraged to use SAGrader for a better shot at an "A."
"Computers Now Grading Students' Writing," ABC News, May 8, 2005 ---
http://abcnews.go.com/Technology/wireStory?id=737451
Jensen Comment:  Aside from some of the obvious advantages such as grammar checking, students should have a more difficult time protesting that the grading is subjective and unfairly favors some students over others.  Actually, computers have been used for some time to grade essays, including essays on the GMAT graduate admission test --- http://www.yaledailynews.com/article.asp?AID=723
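
SAGrader's actual algorithms are proprietary, but the basic notion of checking whether required concepts appear in a draft can be sketched crudely. The toy scorer below is only an illustration of concept-coverage matching with a made-up rubric; it is not the SAGrader method. Real systems go well beyond phrase matching and analyze how the concepts are connected, which is where most of the difficulty lies.

# Hypothetical rubric: each required concept with a few acceptable phrasings.
RUBRIC = {
    "socialization": ["socialization", "socialisation"],
    "role conflict": ["role conflict", "competing roles"],
    "norms": ["norm", "social expectation"],
}

def score_essay(text):
    """Return (points earned, concepts still missing) based on simple phrase matching."""
    lowered = text.lower()
    missing = [concept for concept, phrases in RUBRIC.items()
               if not any(phrase in lowered for phrase in phrases)]
    return len(RUBRIC) - len(missing), missing

points, missing = score_essay(
    "Socialization teaches us the norms we follow; role conflict arises when those norms collide."
)
print(f"{points}/{len(RUBRIC)} required concepts covered; still missing: {missing}")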

References to computer grading of essays --- http://coeweb.fiu.edu/webassessment/references.htm

You can read about PEG at http://snipurl.com/PEGgrade


MEDICAL- AND BUSINESS-SCHOOL ADMISSION TESTS WILL BE GIVEN BY COMPUTER
Applicants to medical and business schools will soon be able to leave their No. 2 pencils at home.  Both the Medical College Admission Test and the Graduate Management Admission Test are ditching their paper versions in favor of computer formats. The Association of American Medical Colleges has signed a contract with Thomson Prometric, part of the Thomson Corporation, to offer the computer-based version of the MCAT beginning in 2007.  The computerized version is being offered on a trial basis in a few locations until then. The GMAT, which has been offered both on paper and by computer since 1997, will be offered only by computer starting in January, officials of the Graduate Management Admission Council said.  The test will be developed by ACT Inc. and delivered by Pearson VUE, a part of Pearson Education Inc. The Law School Admission Council has no immediate plans to change its test, which will continue to be given on paper.
The Chronicle of Higher Education, August 5, 2005, Page A13

Jensen Comment:  Candidates for the CPA examination are now allowed to take it only at computer testing centers.  The GMAT has been offered as an optional computer-based test since 1997.  For years the GMAT has used computerized grading of essay questions and was a pioneer in this regard. 


Assessment in General

Assessment/Learning Issues
Measurement and the No-Significant-Differences Issue

Assessing new technology in learning with both rigor and practicality is nearly impossible. The main problem is the constantly changing technology. By the time assessment research is made available, the underlying technologies may have improved to a point where the findings are no longer relevant to the technologies available at the time of publication. What can be done for students after my university installed a campus-wide network is vastly different from what was possible in the before-network days. A classroom failure using last year's technology may not be an appropriate comparison with a similar effort using newer technology. For example, early LCD panel projections from computers in classrooms were awful in the early 1990s. In the beginning, LCD panels had no color and had to be used in virtually dark classrooms. This was a bad experience for most students and instructors (including me). Then new active-matrix LCD panels added color, but the classrooms still had to be dark. Shortly thereafter, new technologies in overhead projection brightness allowed for more lighting in classrooms while using LCD panels. However, many classrooms are not yet equipped with light-varying controls to set lighting levels optimally. Newer three-beam projectors and LCD data projectors changed everything for electronic classrooms, because now classrooms can have normal lighting as long as lights are not aimed directly at the screen. The point here is that early experiences with the first LCD panel technology are no longer relevant where the latest projection technology, especially in fully equipped electronic classrooms, is available. Unfortunately, some faculty become so discouraged by one or two failed attempts that they abandon future efforts with newer technologies.  

One of the most creative attempts to evaluate effectiveness from a Total Quality Management (TQM) perspective is reported by Prabhu and Ramarapu (1994). This is an attempt to measure learning using a TQM database that can be used to compare alternative teaching methods or entire programs.  [Prabhu, S.S. and N.K. Ramarapu (1994). “A prototype database to monitor course effectiveness: A TQM approach,” T H E Technological Horizons in Education, October, 99-103.]

It is easy to become discouraged with first efforts using older technologies. Many faculty and students became highly frustrated with the early complexities of using the Internet and/or campus networks that were not user-friendly. Unless they took the time and trouble to become well versed in UNIX and experienced hackers, the Internet turned into a totally discouraging nightmare. Now, with the WWW and many other user-friendly innovations in campus and international networking, the need to become an experienced hacker is vastly reduced.


From The Wall Street Journal Accounting Weekly Review on November 17, 2006

TITLE: Colleges, Accreditors Seek Better Ways to Measure Learning
REPORTER: Daniel Golden
DATE: Nov 13, 2006 PAGE: B1
LINK: http://online.wsj.com/article/SB116338508743121260.html?mod=djem_jiewr_ac 
TOPICS: Accounting

SUMMARY: The article discusses college- or university-wide accreditation by regional accreditation bodies and reaction to the Spellings Commission report. Questions extend the accreditation discussion to AACSB accreditation.

QUESTIONS:
1.) What is accreditation? The article describes university-wide accreditation by regional accrediting bodies. Why is this step necessary?

2.) Does your business school have accreditation by Association to Advance Collegiate Schools of Business (AACSB)? How does this accreditation differ from university-wide accreditation?

3.) Why are regional accrediting agencies planning to meet with Secretary Spellings?

4.) Did you consider accreditation in deciding where to go to college or university? Why or why not?

5.) Do you think improvements in assessing student learning are important, as the Spellings Commission argues and accreditors are now touting? Support your answer.

SMALL GROUP ASSIGNMENT: Find out about your college or university's accreditation. When was the last accreditation review? Were there any concerns expressed by the accreditors? How has the university responded to any concerns expressed?

Once these data are gathered, discuss in class in groups:

Has this information been easy or difficult to find? Do you agree with the assessment of concerns about the institution and/or the university's responses?

Reviewed By: Judy Beckman, University of Rhode Island

TITLE: Colleges, Accreditors Seek Better Ways to Measure Learning
REPORTER: Daniel Golden
DATE: Nov 13, 2006 PAGE: B1
LINK: http://online.wsj.com/article/SB116338508743121260.html?mod=djem_jiewr_ac 

At the University of the South, a highly regarded liberal-arts college in Sewanee, Tenn., the dozen professors who teach the required freshman Shakespeare course design their classes differently, assigning their favorite plays and writing and grading their own exams.

But starting next fall, one question on the final exam will be the same across all of the classes, and instructors won't grade their own students' answers to that question. Instead, to assure more objective evaluation, the professors will trade exams and grade each other's students.

The English department adopted this change -- despite faculty grumbling about losing some classroom independence -- under pressure from the Southern Association of Schools and Colleges. The association, one of the six regional groups that accredit nearly 3,000 U.S. colleges, told the University of the South that, to have its accreditation renewed, it would have to do a better job of measuring student learning. Without such accreditation, the school's students wouldn't qualify for federal financial aid.

The shift "does cut into the individual faculty member's autonomy, and that's disturbing," says Jennifer Michael, an associate professor. "On the other hand, it's making us think about how do we figure out what students are actually learning. Maybe having them take and pass a course doesn't mean they've learned everything we think they have."

Regional accreditors used to limit their examinations to colleges' financial solvency and educational resources, with the result that well-established schools enjoyed rubber-stamp approval. But now they are increasingly holding colleges, prestigious or not, responsible for undergraduates' grasp of such skills as writing and critical thinking. And prodded by regional accreditors, colleges are adopting various means of assessing learning in addition to classroom grades, from electronic portfolios that collect a student's work from different courses to standardized testing and special projects for graduating seniors.

The accreditors aren't moving fast enough for the Bush administration, though. In the wake of a federally sponsored study published in 2005 that showed declining literacy among college-educated Americans, Secretary of Education Margaret Spellings and a commission she appointed on the future of higher education want colleges to be more accountable for -- and candid about -- student performance, and they have criticized accreditors as barriers to reform.

Congress sets the standards for accreditors, and the Education Department periodically reviews compliance with those standards. Congress identified "success with respect to student achievement" as a requirement for accreditation in 1992, and then in 1998 made it the top priority. That imperative, along with the advent of online education, has spurred accreditors to rethink their longtime emphasis on such criteria as the number of faculty members with doctorates. Since 2000, several regional accreditors have revamped their rules to emphasize student learning.

"Accreditors have moved the ball forward," says Kati Haycock, a member of the Spellings commission and the director of the nonprofit Education Trust in Washington, D.C., which seeks better schooling for disadvantaged students. "Not far enough, not fast enough, but they have moved the ball forward."

An issue paper written for the commission by Robert Dickeson, a former president of the University of Northern Colorado, complained that accreditation "currently settles for meeting minimum standards," and it called for replacing regional accreditors with a new national foundation. "Technology has rendered the quaint jurisdictional approach to accreditation obsolete," Mr. Dickeson wrote.

The commission didn't endorse that recommendation, but its final report last month cited "significant shortcomings" in accreditation and called for "transformation" of the process. In a Sept. 22 speech marking the release of the report, Secretary Spellings said that accreditors are "largely focused on inputs, more on how many books are in a college library than whether students can actually understand them....That must change."

David Ward, a commission member and the president of the American Council on Education, a higher education advocacy group, declined to sign the report, in part because he objected to its criticism of accreditors as overly simplistic.

Russell Edgerton, president emeritus of the American Association for Higher Education, says "there's no question that American colleges are underachieving," but he argues that accreditors are rising to the challenge. "Ten years ago, I would have said that regional accreditors are dead in the water and asleep at the wheel," he says. But "there's been a kind of renaissance within accreditation agencies in the past five to six years. They're helping institutions create a culture of evidence about student learning."

Mr. Edgerton also thinks the federal government's emphasis on new accountability measures is flawed because it bypasses the judgment of traditional arbiters like faculty and accreditors. "The danger is that the standardized testing approach in K-12 would slop over into higher education," he says. "Higher ed is different."

Jerome Walker, associate provost and accreditation liaison officer for the University of Southern California, agrees that the administration's attacks on accreditors are unfair. The Western Association of Schools and Colleges, which accredits USC, "has been extremely sensitive" to student learning, he says.

According to the Western Association's executive director, Ralph Wolff, the group revamped its standards in 2001 to require colleges to identify preparation needed by entering freshmen and the expectations for student progress in critical thinking, quantitative reasoning and other skills. Its accreditation process now takes four years, up from 1½, and it features a detailed, peer-reviewed proposal for improvement and two site visits, including one devoted to "educational effectiveness."

Historically, research universities like USC "used to blow off" accreditation, Mr. Wolff says. "Now this has become a real challenge for them in a good way."

Encouraged by Mr. Wolff, USC last year assigned the same two essay questions -- one about conformity, another based on a quotation from ethicist Robert Bellah -- to freshmen in a beginning writing course and juniors and seniors in an advanced course. A group of faculty then evaluated the essays without knowing the students' names or which course they were taking. The reassuring outcome, according to Richard Fliegel, assistant dean for academic programs, was that juniors and seniors "demonstrated significantly more critical thinking skills" than freshmen, and that advanced students who had taken the first-year course outperformed transfer students who hadn't taken beginning writing at USC.

Because the writing initiative is tailored to USC's curriculum, the results -- while helpful to administrators and accreditors -- wouldn't necessarily help the public compare USC to other schools. That is a big drawback as far as the Bush administration is concerned. "I have two kids in college now," says Vickie Schray, deputy director of the Spellings commission. "It's a huge expense. Yet there's very little information on return of investment or ability to shop around for the greatest value."

She adds, though, that it is a "misconception" to think that the administration wants to have "one standardized test for all institutions" or to extend the testing requirements of the "No Child Left Behind" law for K-12 schools to higher education.

Even so, one standardized test of critical thinking, the Collegiate Learning Assessment, is becoming popular. It adjusts for students' scores on the SAT and ACT college-entrance exams, potentially allowing more meaningful comparisons of the value added by colleges. The number of schools using the assessment has soared from 54 two years ago to 170 this year. Among those using the test this fall: the University of Texas at Austin, Duke University, Arizona State University and Washington and Lee University.

Roger Benjamin, president of the nonprofit Council for Aid to Education, which sponsors the test, says state officials and university administrators have been the principal forces behind its increasing use. "Accreditors are coming to the party, but a bit late," Mr. Benjamin says.

Meanwhile, Secretary Spellings plans to meet with accreditors in late November to discuss how to "accelerate the focus on student achievement," Ms. Schray says. Accreditors say they welcome the opportunity to tout their progress. "We have made a lot of reforms," says the Western Association's Mr. Wolff. "We'd like to bring the secretary up-to-date on the significance of these reforms and the impact they're already having on institutions."
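The "value added" adjustment mentioned above for the Collegiate Learning Assessment can be illustrated with a simple regression residual. The Python sketch below uses invented institution-level numbers and is not the CLA's actual statistical model; it only shows the idea of comparing an outcome score with what entering SAT/ACT ability alone would predict.

# Minimal sketch of a value-added adjustment (hypothetical data).
import numpy as np

# Hypothetical institutional averages: entering SAT score and senior outcome score
entrance = np.array([1050, 1120, 1200, 1310, 1400], dtype=float)
outcome = np.array([1110, 1150, 1230, 1320, 1380], dtype=float)

# Fit a least-squares line: expected outcome given entering ability
slope, intercept = np.polyfit(entrance, outcome, 1)
expected = intercept + slope * entrance

# A positive residual means the outcome exceeds what entering scores alone predict
value_added = outcome - expected
for e, v in zip(entrance, value_added):
    print(f"Entering score {e:.0f}: value added {v:+.1f}")

Adjusting for entering ability in this way is what potentially allows more meaningful comparisons of the value added by colleges, since raw outcome scores largely mirror the selectivity of admissions.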

 

 


January 6, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

No Significant Difference Phenomenon website http://www.nosignificantdifference.org/ 

The website is a companion piece to Thomas L. Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of 355 research reports, summaries, and papers that document no significant differences in student outcomes between alternate modes of education delivery.


International Society for Technology in Education --- http://www.iste.org/ 

ISTE is a nonprofit professional organization with a worldwide membership of leaders and potential leaders in educational technology. We are dedicated to providing leadership and service to improve teaching and learning by advancing the effective use of technology in K–12 education and teacher education. We provide our members with information, networking opportunities, and guidance as they face the challenge of incorporating computers, the Internet, and other new technologies into their schools.

Home of the National Educational Technology Standards (NETS), the Center for Applied Research in Education Technology (CARET), and the National Educational Computing Conference (NECC), ISTE meets its mission through knowledge generation, professional development, and advocacy. ISTE also represents and informs its membership regarding educational issues of national scope through ISTE–DC. We support a worldwide network of Affiliates and Special Interest Groups (SIGs), and we offer our members the latest information through our periodicals and journals.

 

An organization of great diversity, ISTE leads through presenting innovative educational technology books and programs; conducting professional development workshops, forums, and symposia; and researching, evaluating, and disseminating findings regarding educational technology on an international level. ISTE’s Web site, www.iste.org, contains coverage of many topics relevant to the educational technology community.


 


"Surveying the Digital Landscape: Evolving Technologies 2004," Educause Review, vol. 39, no. 6 (November/December 2004): 78–92. --- http://www.educause.edu/apps/er/erm04/erm0464.asp 

Each year, the members of the EDUCAUSE Evolving Technologies Committee identify and research the evolving technologies that are having the most direct impact on higher education institutions. The committee members choose the relevant topics, write white papers, and present their findings at the EDUCAUSE annual conference.

December 9, 2004 message from Ed Scribner [escribne@nmsu.edu

Bob,

Thanks for that EDUCAUSE link. Who among us old-timers from the mainframe BITNET days would have predicted that “spam management” would top the list of influential campus technologies in 2004?

While following the link you sent, I noticed that Wesleyan has a nicely crafted set of assessment links that you probably already have, but it didn’t turn up in my search of trinity.edu:

Information Technology Services Assessment --- http://www.wesleyan.edu/its/acs/assessment.htt 

Ed Scribner 
New Mexico State

Bob Jensen discusses the long term future of education technologies at http://www.trinity.edu/rjensen/000aaa/updateee.htm#Future 

 


From T.H.E. Journal, April 2004 --- http://www.thejournal.com/magazine/vault/M2664.cfm 

Exclusive Series: SBR

"High (School)-Tech: The Effect of Technology on Student Achievement in Grades 7-12," by Neal Starkman, T.H.E.'s The Focus Newsletter, April 15, 2004 --- http://www.thejournal.com/thefocus/37.cfm 

In Lincolnshire, Ill., teachers at Adlai E. Stevenson High School are mandated to be proficient in the “operation and conceptualization of hardware and networks, applications, information tools, and presentation tools.”

In Scott County, Ky., students throughout the school district participate in a Digital Storytelling Project. The project lets students select an appropriate story, restructure the story in response to a “seven elements” model, create storyboards, gather content, produce videos, and share them at a Digital Storytelling Festival.

In Granger, Ind., eighth-grade students at Discovery Middle School produce a seven-minute news broadcast every morning. They make the assignments; organize the crew; set up camera equipment; block the shots; instruct others, including adults, in their roles in the production; and read the news.

And in Redmond, Wash., Tom Charouhas, a science teacher at Rose Hill Junior High School, uses “probeware” to show students how to determine the force needed to maintain mechanical efficiency in pulley systems. By using probeware, students can actually see the results of their actions on numerous pulleys.

What's going on here?

It's technology in the classroom: word processing programs, e-mail, databases and spreadsheets, modeling software, closed-circuit television, computer networks, CD-ROM encyclopedias, network search tools, desktop publishing, videotape recording and editing equipment, and the list goes on and on. What the chalkboard was to the 20th-century classroom, the computer is to the 21st-century classroom. The one important difference is that the concept of the chalkboard didn't change much over the decades; however, we're just at the beginning of the evolution of the computer as a teaching and learning tool.

Lake Washington School District, which includes Tom Charouhas' school and 41 others, is a good example of how far technology has traveled in schools. The North Central Regional Educational Laboratory (NCREL), online at http://www.ncrel.org, reports that the district started wiring its schools back in 1989. Today, there is a computer for every four students, and the district even has its own channel on cable TV. The district is also committed to renewing its hardware every five years for desktops and every four years for laptops, in addition to training all of its 1,300 teachers (no teacher proficiency, no computer upgrade). Charouhas has seen a “slow and steady climb” in not only the expertise of teachers in technology but also, and much more importantly, the expertise of students. “You can talk about concepts until you're blue in the face,” he says, but he believes that students really learn the science when they actually do the science.

But, is it as easy as that? Is it just a matter of “wiring”? The Center for Applied Research in Educational Technology (CARET), online at http://caret.iste.org, has compiled evidence on just what impact technology has had on student performance. It's concluded that technology improves student performance when the application has the following characteristics:

Curriculum. It directly supports the curriculum objectives being assessed.

Collaboration. It provides opportunities for student collaboration.

Feedback. It adjusts for student ability and prior experience, and provides feedback to the student and teacher about student performance or progress with the application.

Integration. It is integrated into the typical instructional day.

Assessment. It provides opportunities for students to design and implement projects that extend the curriculum content being assessed by a particular standardized test.

Support. It is used in environments where teachers, the school community, and school and district administrators support the use of technology.

None of this, of course, should be surprising. As Charouhas says, “The use of the technology cannot supersede the content… [and] the most important [component] of any classroom is the teacher.”

Elliot Wolfe can attest to that. Wolfe, a senior at Seattle's Garfield High School, takes classes at Seattle Central Community College as part of a program called Running Start. On March 11, he made a presentation on native Catholic boarding schools using PowerPoint and an LCD projector. His 18 slides included photographs and facts about the nature of the classes in boarding schools, where the schools were located, and how many students attended each over a period of time. He used the slides to illustrate the main points and then orally elaborated on them over the course of about 10 minutes.

Was it effective? Sure. All of us, including students, learn in various ways (e.g., auditorily, visually, kinesthetically), and the more of those ways a teacher can employ, the greater the chance of learning. But is it a panacea?

Continued in the article


"Technology's Impact on Academic Achievement," by Samuel Besalel, T.H.E. Journal, January 22, 2004 ---  http://www.thejournal.com/thefocus/33.cfm 

This issue is the first of two articles that focus on the impact of technology on academic achievement. When examining technology's contributions to education, age is not a factor. Throughout every age group, students benefit from technology in the classroom.

There is a wide range of technology used in schools today, from desktop computers in classrooms and labs to digital whiteboards, digital projectors, laptop computers, wireless network technologies, devices for special needs populations, and more.

In this issue, we will focus on how technology increases classroom efficiency and facilitates learning in educational settings from kindergarten through grade six.

We will also examine the kind of changes in student learning that occur as a direct result of technology.

The Link Between Technology and Achievement

Technology in the classroom directly contributes to student achievement, both by making students more effective in their learning and teachers more efficient in their teaching.

Students are attracted to the use of computers, even for such mundane applications as playing math games and reading online books. But when used in this manner, don't they simply replace other possible teaching methods or learning tools? Are there really advantages to such uses?

Actually, yes. Particularly in primary grades, computers help to reinforce many basic skills. While a teacher might find it hard to sustain a child's attention to teach and re-teach math facts, or the spelling of the days of the week, students are much more tolerant of repetition from a computer program; in fact, they come to expect it. This is good news, because repetition is essential in areas such as beginner reading and the learning of almost any fact.

For example, students playing a math game can feel challenged by "beating the high score," making the learning of math both competitive and fun, while encouraging additional practice and drilling of facts.

Advantages are also to be had with online books. Efficient reading goes beyond being able to recognize letters and words. Phrasing is a key aspect of what good readers do. Many online book programs not only display the words of a book, with pictures or animations, but also include both an audio component and highlighting of phrases as the narrator works through the text. This provides an accurate model of what good readers do, helping to build fluent reading skills.

Teacher innovation has never been in short supply. The innovative approaches educators use to leverage technology to the benefit of their students is often more impressive than the technology itself.

With the appropriate targeting and application of technology, substantial gains can be made for student achievement. Various applications of technology can be effective when targeting primary school students to introduce logical concepts, mathematical equations, and cause and effect.

For example, I've witnessed effective lessons presented in a computer lab to 20 or more students using only a single PC and a digital projector.

Because people need to learn how to learn, computer interfaces often pose problems to older learners. This is often not the case with young students with fiercely inquisitive minds. Presented as play, I observed how kindergarteners were cannily introduced to methods to approach software programs. Using The Learning Company's Kid Pix (  http://www.kidpix.com ), the instructor quizzed students on their knowledge of seasons, nature and animals. Together (with the instructor "driving"), they composed a thematic painting for the fall harvesting season. The children observed how various menus of related objects were stored. Using the objects to simulate rubber stamps, together they designed a picture that used their current knowledge and eased them into more information. In the process, they learned to group relationships of animals and plants in higher and lower order (i.e., animals, animals with four legs, mammals) and were introduced to computer terminology such as select, delete, edit, click, and so forth.

Continued in the article


"Evaluating the Impact of Technology: The Less Simple Answer," by Doug Johnson, Educational Technology Journal, January/February 1996 --- http://www.fno.org/jan96/reply.html 

From the National School Boards Association --- http://www.nsba.org/sbot/toolkit/tiol.html 

From a Department of Education 1995 forum, some panelists contended that rather than debating the connections between technology-based instruction and test scores, schools should focus on the most obvious and compelling reason for implementing technology, namely, that students need strong technology skills to succeed in the world of work. This section provides information about the impact technology has on learning.

You can find the following in this section:

ED Report: The Costs and Effectiveness of Educational Technology

"Through the use of advanced computing and telecommunications technology, learning can also be qualitatively different. The process of learning in the classroom can become significantly richer as students have access to new and different types of information, can manipulate it on the computer through graphic displays or controlled experiments in ways never before possible, and can communicate their results and conclusions in a variety of media to their teacher, students in the next classroom, or students around the world. For example, using technology, students can collect and graph real-time weather, environmental, and populations data from their community, use that data to create color maps and graphs, and then compare these maps to others created by students in other communities. Similarly, instead of reading about the human circulatory system and seeing textbook pictures depicting bloodflow, students can use technology to see blood moving through veins and arteries, watch the process of oxygen entering the bloodstream, and experiment to understand the effects of increased pulse or cholesterol-filled arteries on blood flow." (page 16)

"We know now - based on decades of use in schools, on findings of hundreds of research studies, and on the everyday experiences of educators, students, and their families - that, properly used, technology can enhance the achievement of all students, increase families’ involvement in their children’s schooling, improve teachers’ skills and knowledge, and improve school administration and management."


TechKnowLogia --- http://www.techknowlogia.org/ 

TechKnowLogia is an international online journal that provides policy makers, strategists, practitioners and technologists at the local, national and global levels with a strategic forum to:

Explore the vital role of different information technologies (print, audio, visual and digital) in the development of human and knowledge capital;
Share policies, strategies, experiences and tools in harnessing technologies for knowledge dissemination, effective learning, and efficient education services;
Review the latest systems and products of technologies of today, and peek into the world of tomorrow; and
Exchange information about resources, knowledge networks and centers of expertise.

Bob Jensen's threads on education technologies are at http://www.trinity.edu/rjensen/000aaa/0000start.htm


Reading Online (for K-12 Teachers and Students) --- http://www.readingonline.org/ 

Reading Online (ROL) is a peer-reviewed journal of the International Reading Association (IRA). Since its launch in May 1997 it has become a leading online source of information for the worldwide literacy-education community, with tens of thousands of accesses to the site each month.

The journal focuses on literacy practice and research in classrooms serving students aged 5 to 18. “Literacy” is broadly defined to include traditional print literacy, as well as visual literacy, critical literacy, media literacy, digital literacy, and so on. A special mission of the journal is to support professionals as they integrate technology in the classroom, preparing students for a future in which literacy’s meaning will continue to evolve and expand.

The journal is guided by an editorial council whose members adjudicate manuscripts submitted for peer review. In addition to articles, ROL includes invited features, online versions of content from IRA’s peer-reviewed print journals, and reports and other documents of interest to the worldwide literacy education community.


Important Distance Education Site
The Sloan Consortium --- http://www.aln.org/
The purpose of the Sloan Consortium (Sloan-C) is to help learning organizations continually improve quality, scale, and breadth according to their own distinctive missions, so that education will become a part of everyday life, accessible and affordable for anyone, anywhere, at any time, in a wide variety of disciplines.


From Syllabus News on October 14, 2003

Online University Consortium Releases Learner Assessment Tool

A network of universities founded to help companies and employees secure a quality online education announced a Web-based assessment tool for prospective students considering online degree programs. The Online Learner Assessment, unveiled by the Online University Consortium, helps students determine their aptitude for online education in order to choose the best source for their individual learning style. The tool helps Online UC to match learners with qualified degree programs.

"The tool helps learners avoid costly mistakes by making the best education choice for their individual needs," said Greg Eisenbarth, Online UC's executive director. "This allows targeted development and enhances ROI for corporations funding employee training."

Read more: http://info.101com.com/default.asp?id=3157 



Thinking About Assessment:  Assessment is education's new apple-pie issue. Unfortunately, the devil is in the details, by Kenneth C. Green - August 2001 --- http://www.convergemag.com/magazine/story.phtml?id=3030000000002596 

Assessment has become the big thing. President Clinton supported assessment. President Bush supports assessment. It seems like every member of Congress favors assessment. So too, it seems, do all the nation's governors, and almost every elected state and local official -- school board members, city council members, mayors, city attorneys, sheriffs, county commissioners, park commissioners, and more.

The CEOs of major U.S. companies want more assessment. Moreover, many school superintendents, like Education Secretary Rod Paige, former superintendent of the Houston Independent School District, also support assessment.

Assessment is education's new apple pie issue. Everyone supports efforts to improve education; and everyone seems to believe more assessment will help improve education.

It's just grand that many people in so many elected and administrative offices support assessment.

There is, however, one little problem: getting all these individuals to agree on how and what to assess and how to use the data. They all agree about the need for more assessment. Unfortunately, the devil is in the details.

It may be a stretch, but I see some striking similarities in the public conversation about technology and assessment.

First, well-informed folks -- some in education, some not -- believe that more assessment will improve education. Similarly, many people -- some who are educators and many others who simply care about education -- believe that more technology will improve education.

Second, assessment costs lots of money. One dimension of the discussion underway in Congress and in state capitols involves how much money to spend on assessment. Similarly, one dimension of the continuing conversation about technology in schools and colleges is about the costs.

Third, it seems like everyone has strong opinions about assessment. Moreover, anyone with an opinion becomes an immediate expert. Similarly, it seems like everyone has strong opinions about technology. Moreover, like opinions about assessment, anyone with an opinion about technology believes it is an expert opinion. In an interesting and important twist on Cartesian logic, we are all sum ergo experts on both assessment and technology.

Finally, as an acknowledged sum ergo expert, let me suggest an additional similarity: Those who profess great faith in the power of assessment or technology to enhance education may be engaged in just that -- an act of faith!

Wait, please. Let me explain. I believe in assessment. I believe in technology. But I also believe in research. And while I know a little less about the assessment literature and a little more about the technology literature, I do know enough about both to know that the research literature in both areas is often ambiguous.

Indeed, advocates for both assessment and for technology often have to confront the "no significant differences" question. For those of you who missed statistics in college, this means that at the end of the day, does the treatment (the intervention) generate a statistically significant difference in outcomes or performance?

Here, the hard questions are about learning outcomes. Let's frame the questions as hypotheses in a doctoral dissertation:

H1: Assessment contributes to enhanced learning outcomes for individual students.

H2: Assessment contributes to the enhanced performance of schools and colleges.

H3: Technology contributes to enhanced learning outcomes for individual students.

H4: Technology contributes to the enhanced performance of schools and colleges.

You may take issue with the academic presentation. However, in the context of the public discussions, as well as public policy and educational planning, these are the core issues: Do assessment and technology contribute to enhanced student learning and to the enhanced performance of schools and colleges?

Alas, we don't really know. We think we know. We draw on personal experience as hard data. We accept anecdote and testimonial as evidence of impacts. But the hard research evidence remains elusive; the aggregated research is ambiguous.

Indeed, it may well be a good (and obvious) "intervention," as suggested by President Bush and others, to conduct annual "reading and math assessments [to] provide parents with the information they need, to know how well their child is doing in school, and how well the school is educating their child." But we really do not know if this will make a difference in educational experiences of students or the effectiveness of individual schools.

Also see http://www.campuscomputing.net/ 
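Green's hypotheses H1 through H4 ultimately reduce to a significance test on outcomes, which is where the "no significant differences" finding comes from. The Python sketch below uses invented exam scores for an onsite and an online section; a real study would of course require a proper design, controls, and much larger samples.

# Minimal sketch of the two-sample test behind "no significant differences"
# (the scores below are invented for illustration).
from scipy import stats

onsite_scores = [78, 85, 69, 90, 74, 88, 81, 77, 93, 70]
online_scores = [80, 83, 72, 88, 75, 86, 79, 78, 91, 73]

t_stat, p_value = stats.ttest_ind(onsite_scores, online_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant difference in mean scores at the 5% level.")
else:
    print("Significant difference in mean scores at the 5% level.")

A large p-value here does not prove the two delivery modes are equally effective; it only says the data at hand cannot distinguish them, which is precisely the ambiguity Green complains about.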


Controversies Regarding Pedagogy

"No Lectures or Teachers, Just Software," by Joshua Green, The New York Times, August 10, 2001 --- http://www.nytimes.com/library/tech/00/08/circuits/articles/10prof.html

The aim is to get students to delve into a course's volumes of academic information, including hours of videotape of experts in a field related to the program. Students running Krasnovia, for example, can draw on video advice from Thomas Boyatt, a former ambassador, and Bruce Laingen, an American diplomat who was held hostage in Iran and is president of the American Academy of Diplomacy.

Rather than subject students to full-blown lectures, Dr. Schank breaks the video into snippets that address only the question at hand. He believes students learn more effectively through this piecemeal approach, which he calls "just in time" learning.

"The value of the computer is that it allows kids to learn by doing," he said. "People don't learn by being talked at. They learn when they attempt to do something and fail. Learning happens when they try to figure out why."

Bald, bearded and powerfully built, Dr. Schank's appearance and demeanor suggest Marlon Brando in the movie "Apocalypse Now." His professional reputation is somewhat similar. His brusque manner and outspoken criticism of those he disagrees with have alienated some colleagues and earned him the reputation of iconoclast. But his success in designing teaching software has made him a much sought after figure among businesses, military clients and universities.

His company puts extraordinary effort into creating software courses, each of which can take up to a year to design and can cost up to $1 million. Video is an important component of Dr. Schank's program. After interviewing professors, his staff develops a story, writes a script, hires professional actors and begins filming. Cognitive Arts even arranged the use of CNN footage of the Bosnian conflict to lend the aura of authenticity to Crisis in Krasnovia.

The programs allow students to progress at their own pace. Dr. Schank says the semester system is badly outdated, a view he also holds for most tests, which foster only temporary memorization, he says. His programs require students to write detailed reports on what they have learned. A student who cuts corners does not finish the course, and the failing grade is delivered in the spirit of a video game. In Krasnovia, for instance, an incomplete report would draw a mock newscast in which commentators ridicule the president's address. Students must then go back and improve their work.

These multimedia simulations differ radically from current online offerings. "When you look at online courses now, what do you see?" Dr. Schank said. "Text online with a quiz. We're not taking a lecture and putting it on screen. We're restructuring these courses into goal-based scenarios that will get kids excited."

Dr. Schank says that such courses will render traditional classes -- and many professors -- obsolete. "The idea of one professor for one class is ancient," he said. "New technology is going to give every student access to the best professors in the world."

But many academics dismiss Dr. Schank's prediction that traditional teaching methods will soon become obsolete and question software learning's pedagogic value. "Education depends on relationships between people," said David F. Noble, a history professor at York University in Toronto and a critic of online learning. "Interactive is not the same as interpersonal. What Schank doesn't recognize is that teaching is not just about relaying knowledge."

Others warn against accepting radical new technology without pause. "The American university system is a highly functional institution," said Phil Agre, an associate professor of information studies at the University of California at Los Angeles. "The danger is that we will apply overly simplistic ideas about technology and tear apart the institution before we really know what we're doing."

Related evidence on impact of removing lectures from course is found in the BAM project described at http://www.trinity.edu/rjensen/265wp.htm 


October 8, 2003 message from Laurie Padgett [padgett8@BELLSOUTH.NET

Lauretta,

Yes it was live chat (synchronous) using voice (which also had a text chat box). In this particular class we would meet every other week in the evening around 7/8. I think they lasted 1 hr to 1 1/2 hr (I cannot recall exactly). I took two classes a semester, so I would attend two live chats every two weeks. The instructors would coordinate to ensure they would not plan the class for the same evening. In addition to the live chat, we also used another program whose name I just cannot remember (I think it might have been called Placeware). It was really neat because it looked like an auditorium and you were a little character (or may I say a colored dot). You could raise your hand, ask a question, type text, etc. We would use the chat program where he would talk as he conducted the presentation in the other program. If you had a question you would raise your hand and then use the live chat to talk. The program was starting to get more advanced as I graduated.

The Master's of Accounting program that I went through (as I understand it from the professor I had) was one of the first to go online for this particular program. I was in the first graduating class which started April of 2000 and completed September 2001. I attended Nova Southeastern University in Florida. ( http://emacc.huizenga.nova.edu/ )

I know that some feel that live chat (synchronous) might not work due to time zones and some feel that the text works just as well. From my personal experience and opinion I feel that a Master's program in "Accounting" needs more than just text written but interaction between your fellow classmates too. I feel it was more productive because it is like you are sitting in a class listening to the instructor and you have the opportunity to ask a question by typing in the box & then the instructor sees it & answers it with his voice. Additionally, you cover much more subject area than you can with a text chat. It really worked well.

Again, these are my opinions and each person has his own. This is what makes us unique.

Laurie

-----Original Message----- 
Subject: Re: peer evaluation of a web-based course

Laurie:

When you say "live" chat, are you referring to the chats in which all students come together at the same time (synchronous)? I tried to initiate this type of chat in my online class and found students's schedules to be an issue.

Has anyone tried putting students into groups to do synchronous chatting about assignments? How did this work for your class?

Lauretta A. Cooper, MBA, CPA 
Delaware Technical & Community College Terry Campus

 


"Seven Principles of Effective Teaching: A Practical Lens for Evaluating Online Courses" 
by Charles Graham, Kursat Cagiltay, Byung-Ro Lim, Joni Craner and Thomas M. Duffy
Assessment, March/April 2001 --- http://horizon.unc.edu/TS/default.asp?show=article&id=839 
Reproduced below with permission.

The "Seven Principles for Good Practice in Undergraduate Education," originally published in the AAHE Bulletin (Chickering & Gamson, 1987), are a popular framework for evaluating teaching in traditional, face-to-face courses. The principles are based on 50 years of higher education research (Chickering & Reisser, 1993). A faculty inventory (Johnson Foundation, "Faculty," 1989) and an institutional inventory (Johnson Foundation, "Institutional," 1989) based on these principles have helped faculty members and higher-education institutions examine and improve their teaching practices.

We, a team of five evaluators from Indiana University's Center for Research on Learning and Technology (CRLT), recently used these principles to evaluate four online courses in a professional school at a large Midwestern university. (The authors are required to keep the identity of that university confidential.—Ed.) The courses were taught by faculty members who also taught face-to-face courses. Conducted at the joint request of faculty and administration, the evaluations were based on analysis of online course materials, student and instructor discussion-forum postings, and faculty interviews. Although we were not permitted to conduct student interviews (which would have enriched the findings), we gained an understanding of student experiences by reading postings to the discussion forum.

Taking the perspective of a student enrolled in the course, we began by identifying examples of each of Chickering and Gamson's seven principles. What we developed was a list of "lessons learned" for online instruction that correspond to the original seven principles. Since this project involved practical evaluations for a particular client, they should not be used to develop a set of global guidelines. And since our research was limited in scope and was more qualitative than quantitative, the evaluations should not be considered a rigorous research project. Their value is to provide four case studies as a stimulus for further thought and research in this direction.

Principle 1: Good Practice Encourages Student-Faculty Contact

Lesson for online instruction: Instructors should provide clear guidelines for interaction with students.

Instructors wanted to be accessible to online students but were apprehensive about being overwhelmed with e-mail messages or bulletin board postings. They feared that if they failed to respond quickly, students would feel ignored. To address this, we recommend that student expectations and faculty concerns be mediated by developing guidelines for student-instructor interactions. These guidelines would do the following:

Principle 2: Good Practice Encourages Cooperation Among Students

Lesson for online instruction: Well-designed discussion assignments facilitate meaningful cooperation among students.

In our research, we found that instructors often required only "participation" in the weekly class discussion forum. As a result, discussion often had no clear focus. For example, one course required each of four students in a group to summarize a reading chapter individually and discuss which summary should be submitted. The communication within the group was shallow. Because the postings were summaries of the same reading, there were no substantive differences to debate, so that discussions often focused on who wrote the most eloquent summary.

At the CRLT, we have developed guidelines for creating effective asynchronous discussions, based on substantial experience with faculty members teaching online. In the study, we applied these guidelines as recommendations to encourage meaningful participation in asynchronous online discussions. We recommended the following:

Principle 3: Good Practice Encourages Active Learning

Lesson for online instruction: Students should present course projects.

Projects are often an important part of face-to-face courses. Students learn valuable skills from presenting their projects and are often motivated to perform at a higher level. Students also learn a great deal from seeing and discussing their peers' work.

While formal synchronous presentations may not be practical online, instructors can still provide opportunities for projects to be shared and discussed asynchronously. Of the online courses we evaluated, only one required students to present their work to the class. In this course, students presented case study solutions via the class Web site. The other students critiqued the solution and made further comments about the case. After all students had responded, the case presenter updated and reposted his or her solution, including new insights or conclusions gained from classmates. Only at the end of all presentations did the instructor provide an overall reaction to the cases and specifically comment about issues the class identified or failed to identify. In this way, students learned from one another as well as from the instructor.

Principle 4: Good Practice Gives Prompt Feedback

Lesson for online instruction: Instructors need to provide two types of feedback: information feedback and acknowledgment feedback.

We found during the evaluation that there were two kinds of feedback provided by online instructors: "information feedback" and "acknowledgement feedback." Information feedback provides information or evaluation, such as an answer to a question, or an assignment grade and comments. Acknowledgement feedback confirms that some event has occurred. For example, the instructor may send an e-mail acknowledging that he or she has received a question or assignment and will respond soon.

We found that instructors gave prompt information feedback at the beginning of the semester, but as the semester progressed and instructors became busier, the frequency of responses decreased, and the response time increased. In some cases, students got feedback on postings after the discussion had already moved on to other topics. Clearly, the ideal is for instructors to give detailed personal feedback to each student. However, when time constraints increase during the semester's busiest times, instructors can still give prompt feedback on discussion assignments by responding to the class as a whole instead of to each individual student. In this way, instructors can address patterns and trends in the discussion without being overwhelmed by the amount of feedback to be given.

Similarly, we found that instructors rarely provided acknowledgement feedback, generally doing so only when they were behind and wanted to inform students that assignments would be graded soon. Neglecting acknowledgement feedback in online courses is common, because such feedback involves purposeful effort. In a face-to-face course, acknowledgement feedback is usually implicit. Eye contact, for example, indicates that the instructor has heard a student's comments; seeing a completed assignment in the instructor's hands confirms receipt.

Principle 5: Good Practice Emphasizes Time on Task

Lesson for online instruction: Online courses need deadlines.

One course we evaluated allowed students to work at their own pace throughout the semester, without intermediate deadlines. The rationale was that many students needed flexibility because of full-time jobs. However, regularly-distributed deadlines encourage students to spend time on tasks and help students with busy schedules avoid procrastination. They also provide a context for regular contact with the instructor and peers.

Principle 6: Good Practice Communicates High Expectations

Lesson for online instruction: Challenging tasks, sample cases, and praise for quality work communicate high expectations.

Communicating high expectations for student performance is essential. One way for instructors to do this is to give challenging assignments. In the study, one instructor assigned tasks requiring students to apply theories to real-world situations rather than remember facts or concepts. This case-based approach involved real-world problems with authentic data gathered from real-world situations.

Another way to communicate high expectations is to provide examples or models for students to follow, along with comments explaining why the examples are good. One instructor provided examples of student work from a previous semester as models for current students and included comments to illustrate how the examples met her expectations. In another course, the instructor provided examples of the types of interactions she expected from the discussion forum. One example was an exemplary posting while the other two were examples of what not to do, highlighting trends from the past that she wanted students to avoid.

Finally, publicly praising exemplary work communicates high expectations. Instructors do this by calling attention to insightful or well-presented student postings.

Principle 7: Good Practice Respects Diverse Talents and Ways of Learning

Lesson for online instruction: Allowing students to choose project topics incorporates diverse views into online courses.

In several of the courses we evaluated, students shaped their own coursework by choosing project topics according to a set of guidelines. One instructor gave a discussion assignment in which students researched, presented, and defended a current policy issue in the field. The instructor allowed students to research their own issue of interest, instead of assigning particular issues. As instructors give students a voice in selecting their own topics for course projects, they encourage students to express their own diverse points of view. Instructors can provide guidelines to help students select topics relevant to the course while still allowing students to share their unique perspectives.

Conclusion

The "Seven Principles of Good Practice in Undergraduate Education" served as a practical lens for our team to evaluate four online courses in an accredited program at a major U.S. university. Using the seven principles as a general framework for the evaluation gave us insights into important aspects of online teaching and learning.

A comprehensive report of the evaluation findings is available in a CRLT technical report (Graham, et al., 2000). 

References

Chickering, A., & Gamson, Z. (1987). Seven principles of good practice in undergraduate education. AAHE Bulletin, 39, 3-7.

Chickering, A., & Reisser, L. (1993). Education and identity. San Francisco: Jossey-Bass.

Graham, C., Cagiltay, K., Craner, J., Lim, B., & Duffy, T. M. (2000). Teaching in a Web-based distance learning environment: An evaluation summary based on four courses. Center for Research on Learning and Technology Technical Report No. 13-00. Indiana University Bloomington. Retrieved September 18, 2000 from the World Wide Web: http://crlt.indiana.edu/publications/crlt00-13.pdf

Principles for good practice in undergraduate education: Faculty inventory. (1989). Racine, WI: The Johnson Foundation, Inc.

Principles for good practice in undergraduate education: Institutional inventory. (1989). Racine, WI: The Johnson Foundation, Inc.

A comprehensive report of the evaluation findings is available on the Web (in PDF format) at http://crlt.indiana.edu/publications/crlt00-13.pdf 


Teaching at an Internet Distance: the Pedagogy of Online Teaching and Learning The Report of a 1998-1999 University of Illinois Faculty Seminar --- http://www.vpaa.uillinois.edu/tid/report/tid_report.html 

In response to faculty concern about the implementation of technology for teaching, a year-long faculty seminar was convened during the 1998-99 academic year at the University of Illinois. The seminar consisted of 16 members from all three University of Illinois campuses (Chicago, Springfield, and Urbana-Champaign) and was evenly split, for the sake of scholarly integrity, between "skeptical" and "converted" faculty. The seminar focused almost entirely on pedagogy. It did not evaluate hardware or software, nor did it discuss how to provide access to online courses or how to keep them secure. Rather, the seminar sought to identify what makes teaching good teaching, whether in the classroom or online. External speakers at the leading edge of this discussion also provided pro and con views.

The seminar concluded that online teaching and learning can be done with high quality if new approaches are employed which compensate for the limitations of technology, and if professors make the effort to create and maintain the human touch of attentiveness to their students. Online courses may be appropriate for both traditional and non-traditional students; they can be used in undergraduate education, continuing education, and in advanced degree programs. The seminar participants thought, however, that it would be inappropriate to provide an entire undergraduate degree program online. Participants concluded that the ongoing physical and even emotional interaction between teacher and students, and among students themselves, was an integral part of a university education.

Because high quality online teaching is time and labor intensive, it is not likely to be the income source envisioned by some administrators. Teaching the same number of students online at the same level of quality as in the classroom requires more time and money.

From our fundamental considerations of pedagogy we have prepared a list of practice-oriented considerations for professors who might be interested in teaching online, and another list for administrators considering expanding online course offerings.

Practical Considerations for Faculty:

Whom do I teach? (Sections 2,3) The fraction of "nontraditional" students is not as high as some make it out to be, but it is still significant. Thanks to the baby boomlet, the number of young, "traditional" students will be as high as or higher than ever through the next decade. Many of the contexts of online course delivery given in Table 5, for professional training/continuing education, undergraduate education, and graduate education for both traditional and nontraditional students, are viable. There are exceptions: first, certain types of advanced graduate work cannot be performed online, and second, traditional students benefit from the maturing, socializing component of an undergraduate college education, and this requires an on-campus presence.

How do I teach? (Sections 4,5) Attempts are being made to use instructional technology such as real-time two-way videoconferencing to simulate the traditional classroom. With improvements in technology this mode may yet succeed, but from what we have seen, the leaders in this area recommend shifts from "traditional" teaching paradigms. Two new online paradigms that appear to work well are text-based computer mediated communication (CMC) for courses that are traditionally taught in the discussion or seminar mode, and interactive, graphically based material for courses that are traditionally taught in the lecture mode. Methods are by no means limited to these two.

How many do I teach? (Section 5) High quality teaching online requires smaller student/faculty ratios. The shift from the classroom to online has been described as a shift from "efficiency to quality." We also believe a motivational human touch must come into play in the online environment just as it does in the classroom. Students should feel they are members of a learning community and derive motivation to engage in the material at hand from the attentiveness of the instructor.

How do I ensure high quality of online teaching? (Sections 2, 6, 7) Quality is best assured when ownership of developed materials remains in the hands of faculty members. The University of Illinois' Intellectual Property Subcommittee Report on Courseware Development and Distribution recommends that a written agreement between the courseware creator and the administration be made in advance of any work performed. Evaluation of learning effectiveness is also a means to ensure high quality. We suggest a broad array of evaluation areas that includes, but is not limited to, a comparison of learning competence with the traditional classroom.

Policy Issues for Administrators

How do I determine the worth of teaching technology? (Sections 1, 2) On any issue involving pedagogy, faculty members committed to teaching should have the first and last say. On the other hand, faculty must be held responsible for good teaching. Online courses should not be motivated by poor instructor performance in large classes.

How do I encourage faculty to implement technology in their teaching? (Section 7) Teaching innovation should be expected, respected, and rewarded as an important scholarly activity. At the same time, not all classes are amenable to online delivery.

To ensure the quality of a course, it is essential that knowledgeable, committed faculty members continue to have responsibility for course content and delivery. Therefore, intellectual property policies should allow for faculty ownership of online courseware. The commissioning of courses from temporary instructors should be avoided, and the university should be wary of partnerships with education providers in which faculty members have commercial interests.

Will I make money with online teaching? (Sections 3, 5) The scenario of hundreds or thousands of students enrolling in a well-developed, essentially instructor-free online course does not appear realistic, and efforts to pursue it will result in wasted time, effort, and expense. With rare exceptions, the successful online courses we have seen feature low student to faculty ratios. Those rare exceptions involve extraordinary amounts of the professor's time. And besides the initial investment in the technology, technical support for professors and students and maintenance of hardware and software are quite expensive.

Online teaching has been said to be a shift from "efficiency" to "quality," and quality usually doesn't come cheaply. Sound online instruction is not likely to cost less than traditional instruction. On the other hand, some students may be willing to pay more for the flexibility and perhaps better instruction of high quality online courses. This is already the case at a growing number of graduate-level business schools. However, it is likely that a large number of "traditional" students, including the baby boomlet, will continue to want to pay for a directly attentive professor and the on-campus social experience.

How do I determine if online teaching is successful? (Sections 5, 6) In the short term, before history answers this question, we think that a rigorous comparison of learning competence with traditional classrooms can and should be done. High quality online teaching is not just a matter of transferring class notes or a videotaped lecture to the Internet; new paradigms of content delivery are needed. Particular features to look for in new courses are the strength of professor-student and student-student interactions, the depth at which students engage in the material, and the professor's and students' access to technical support. Evidence of academic maturity, such as critical thinking and synthesis of different areas of knowledge, should be present in more extensive online programs.

For the complete report, go to http://www.vpaa.uillinois.edu/tid/report/tid_report.html 


SOME HELPFUL LINKS

The SCALE experiments at the University of Illinois.  You can find a review and the links at http://www.trinity.edu/rjensen/255wp.htm 

The LEAD program at the University of Wisconsin.  See http://www.trinity.edu/rjensen/book01q1.htm#020901 

The Clipper Project at Lehigh University.  See  http://clipper.lehigh.edu/ 

Download Dan Stone's audio and presentation files from http://www.cs.trinity.edu/~rjensen/000cpe/00start.htm 

Evaluating Online Educational Materials for Use in Instruction (tremendous links) --- http://www.ed.gov/databases/ERIC_Digests/ed430564.html 

Do you recall the praise that I lavished on the ethics website of a Carnegie-Mellon University Philosophy Professor named Robert Cavalier in my March 22, 2000 edition of New Bookmarks?  See http://www.trinity.edu/rjensen/book00q1.htm#032200 

Robert Cavalier now has an article entitled "Cases, Narratives, and Interactive Multimedia," in Syllabus, May 2000. pp. 20-22.  The online version of the Syllabus article is not yet posted, but will eventually be available at http://www.syllabus.com/ 

The purpose of our evaluation of A Right to Die?  The Case of Dax Cowart was to see if learning outcomes for case studies could be enhanced with the use of interactive multimedia.  My Introduction to Ethics class was divided into three groups:  Text, Film, and CD-ROM.  Equal distribution was achieved by using student scores on previous exams plus their Verbal SAT scores.

Two graders were trained and achieved more than 90 percent inter-grader agreement. The results of the students' performance were put through statistical analysis, and the null hypothesis was rejected for the CD/Film and CD/Text groups. A statistically significant difference was demonstrated in favor of interactive multimedia.
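
The passage above describes a three-group comparison with trained graders and a hypothesis test. Below is a minimal Python sketch of the kind of analysis it implies; it is my own illustration, not Cavalier's actual code, and the scores, group sizes, and agreement tolerance are all invented.

    # Hedged sketch (invented data, not Cavalier's analysis): percent agreement
    # between two trained graders, and a one-way ANOVA comparing essay scores
    # across the Text, Film, and CD-ROM groups.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical essay scores (0-100) for three equally sized groups.
    text = rng.normal(72, 8, 30)
    film = rng.normal(74, 8, 30)
    cdrom = rng.normal(80, 8, 30)

    # Two graders scoring the same essays; "agreement" here is the share of
    # essays on which their scores fall within a small tolerance.
    grader_a = np.concatenate([text, film, cdrom])
    grader_b = grader_a + rng.normal(0, 3, grader_a.size)
    agreement = np.mean(np.abs(grader_a - grader_b) <= 5)
    print(f"Grader agreement (within 5 points): {agreement:.0%}")

    # One-way ANOVA; a small p-value would justify rejecting the null hypothesis
    # of equal group means, as reported for the CD/Film and CD/Text comparisons.
    f_stat, p_value = stats.f_oneway(text, film, cdrom)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

In practice, pairwise comparisons such as CD/Film and CD/Text would follow the ANOVA with t-tests or a post hoc procedure.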

 
Microsoft in Higher Education - Case Studies
Internet Connections
The Web of Asynchronous Learning Networks
Asynchronous Learning Magazine
Case Studies In Science Education
State of Change: Images of Science Education Reform in Iowa
Wisconsin Center for Education Research
The Internet and Distance Learning in Accounting Education—IFAC
Good links to education sites http://www.teleport.com/~hadid/bookmark_page.html
US News Online Comparisons of Programs in Higher Education
ERIC #E530
Education Review: A Journal of Book Reviews
Assessment and Accountability Program
The "No Significant Difference" Phenomenon (education technology, history)
Bibliography on Evaluating Internet Resources
Assessing Child Behavior and Learning Abilities
Case-Based Reasoning in the Web
CLAC 1998 Annual Conference at Trinity University
Howard Gardner: Seven Types of Intelligence
Welcome to the ETS Net
net.wars / contents (top site from Mike Kearl)
Heinemann Internet Help Subject Guide (Help in Using Search Engines)
Index of infobits/text/
Margaret Fryatt's Home Page at OISE
FIU Student Evaluations of Courses
The University of Western Ontario Student Evaluations
Meeting the Training Challenge
Net Search
Network-Based Electronic Publishing of Scholarly Works: A Selective Bibliography
Real Problems in a Virtual World
Seeing Through Computers, Sherry Turkle, The American Prospect
Technology for Creating a Paperless Course
Technology Review Home Page
The Center for Educational Technology Program
The Distance Educator
The World Lecture Hall
The World-Wide Web Virtual Library: Educational Technology (21-May-1996)
TR: October '96: Brody
Math Forum: Bibliography - Alternative Instruction/Assessment
Pathways to School Improvement
Education World (tm) Where Educators Go To Learn

The Theory Into Practice Database http://www.gwu.edu/~tip/index.html

This is a really useful education technology and education assessment search engine
http://socserv2.mcmaster.ca/srnet/evnet.htm  


The word "metacognition" arises once again.

"Assessing the Impact of Instructional Technology on Student Achievement," by Lorraine Sherry, Shelley Billig, Daniel Jesse, and Deborah Watson-Acosta, T.H.E. Journal, February 2001, pp. 40-43 --- http://www.thejournal.com/magazine/vault/A3297.cfm 

Four separate simplified path analysis models were tested. The first pair addressed process and product outcomes for class motivation, and the second pair addressed school motivation. The statistically significant (p < .05) results were as follows:

Clearly, correlation does not imply causality. However, when each of these elements was considered as an independent variable, there was a corresponding change in associated dependent variables. For example, there was a significant correlation between motivation and metacognition, indicating that students' enthusiasm for learning with technology may stimulate students' metacognitive (strategic) thinking processes. The significant correlations between motivation, metacognition, inquiry learning, and the student learning process score indicate that motivation may drive increases in the four elements connected by the first path. Similarly, the significant correlations between motivation, metacognition, application of skills, and the student product score indicate that motivation may drive increases in the four elements connected by the second path.

Based on the significant correlations of the two teacher measurements of student achievement with the student survey data, these data validated the evaluation team's extension of the Developing Expertise model to explain increases in student performance as a result of engaging in technology-supported learning activities. Moreover, nearly all students across the project met the standards for both the teacher-created student product assessment and the learning process assessment. This indicates that, in general, the project had a positive impact on student achievement.

Conclusions

These preliminary findings suggest that teachers should emphasize the use of metacognitive skills, application of skills, and inquiry learning as they infuse technology into their respective academic content areas. Moreover, these activities are directly in line with the Vermont Reasoning and Problem Solving Standards, and with similar standards in other states. The ISTE/NETS standards for assessment and evaluation also suggest that teachers:

Rockman (1998) suggests that "A clear assessment strategy that goes beyond standardized tests enables school leaders, policymakers, and the community to understand the impact of technology on teaching and learning." RMC Research Corporation's extension of the Sternberg model can be used to organize and interpret a variety of student self-perceptions, teacher observations of student learning processes, and teacher-scored student products. It captures the overlapping kinds of expertise that students developed throughout their technology-related activities.

One of the greatest challenges facing the Technology Innovation Challenge Grants and the Preparing Tomorrow's Teachers To Use Technology (PT3) grants is to make a link between educational technology innovations, promising practices for teaching and learning with technology, and increases in student achievement. We believe that this model may be replicable in other educational institutions, including schools, districts, institutions of higher learning, and grant-funded initiatives. However, to use this model, participating teachers must be able to clearly identify the standards they are addressing in their instruction, articulate the specific knowledge and skills that are to be fostered by using technology, carefully observe student behavior in creating and refining their work, and create and benchmark rubrics that they intend to use to evaluate student work.
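
The correlational reasoning in the excerpt above is straightforward to illustrate. The sketch below is not the evaluation team's code; it simulates survey-style measures with invented names and numbers and computes the pairwise Pearson correlations that, when significant, would become candidate links in a simplified path model.

    # Hedged sketch (invented data): pairwise correlations underlying a
    # simplified path model of motivation -> metacognition -> inquiry -> outcome.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of students
    motivation = rng.normal(size=n)
    metacognition = 0.5 * motivation + rng.normal(scale=0.9, size=n)
    inquiry = 0.4 * metacognition + rng.normal(scale=0.9, size=n)
    process_score = 0.3 * inquiry + 0.3 * motivation + rng.normal(scale=0.9, size=n)

    data = np.vstack([motivation, metacognition, inquiry, process_score])
    labels = ["motivation", "metacognition", "inquiry", "process_score"]

    # Each significant correlation (p < .05 in the study) is a candidate path.
    r = np.corrcoef(data)
    for label, row in zip(labels, r):
        print(f"{label:>14}: {np.round(row, 2)}")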

The word "metacognition" also appears at http://www.trinity.edu/rjensen/265wp.htm 


LEAD and SCALE for Evaluation and Assessment of Asynchronous Learning 
As summarized in my February 9, 2001 edition of New Bookmarks --- http://www.trinity.edu/rjensen/book01q1.htm#020901 

The feature of the week is evaluation and assessment of asynchronous learning network (ALN) courses and technology-aided course materials.  The featured sites are the following:


 

The ADEPT Program in the School of Engineering at Stanford University made the world take notice that not every prestigious university would stand aloof from distance education, haughtily insisting that its mission did not include delivering online courses.  Other prestigious universities such as Columbia University, Duke University, and the London School of Economics certainly took notice following the immediate success of Stanford's ADEPT Program in delivering a prestigious online Master of Engineering degree to off-campus students.

Stanford, through Stanford Online, is the first university to incorporate video with audio, text, and graphics in its distance learning offerings. Stanford Online also allows students to ask questions or otherwise interact with the instructor, teaching assistant, and/or other students asynchronously from their desktop computer. Stanford Online is credited by many sources as a significant contributor to the growth of Silicon Valley, and to the competitive technical advantage of companies that participate in continuing education through distance learning.

Learn More about Stanford Online

Some distance education programs, such as the ADEPT Program at Stanford University, are almost entirely asynchronous, with neither face-to-face onsite classes nor online virtual classes.  Others, like Duke's Global Executive MBA program, are mostly synchronous, with online virtual classes plus occasional onsite classes and field trips.

You can read the following about asynchronous learning in the ADEPT program as reported at http://ww.stanford.edu/history/finalreport.html 

Conclusions

In our project proposal, we stated that there were several potential benefits to the use of asynchronous techniques in education. These included increased course access for students, increased quality of the educational experience, and lower costs.

Our experience to date mirrors that of others in that it clearly demonstrates the value of increased access. This includes not only students who had no access previously, but also students who used ADEPT to review material previously accessed by other methods and to enable a certain amount of schedule flexibility. At the same time, the evidence from our project suggests that increased access may not be sufficient, by itself, to justify the cost of providing asynchronous courses to those with other options. This conclusion is, of course, restricted to our particular student body which is composed of high-performing graduate students in technical disciplines who are fortunate enough in most cases to have a variety of options for accessing educational material.

Results from our project suggest that to raise the quality of the educational experience, significant changes in pedagogy will be necessary. Our belief is that the key to this is to find ways to exploit the ability of the technologies to provide a more flexible learning experience. The flexibility of time-on-task provided by asynchronous techniques is obvious. However, other dimensions of flexibility might include flexibility of media (text vs. graphics vs. audio/video for example) as well as flexibility of course content. For many courses, there is more than one acceptable set of content and more than one acceptable sequencing of content as well. Asynchronously delivered material in multimedia format has the potential of providing a customized, possibly even unique, educational experience to each student based on his or her educational goals, background, and experience. Currently however, we would argue that no one knows how to do this well.

The issue of cost is most problematic. As mentioned above, there is an expectation that asynchronously delivered courses will be less costly than synchronously delivered ones. To some extent this is a simple pricing issue. However, if we frame the issue as the need for the production, maintenance, and delivery costs of an asynchronous course to be less than that of either a live or televised class, we can make some observations. Our experience shows that the production and delivery costs of adequate quality multimedia content are high. In a situation such as that at Stanford, where classes are taught live and are also televised, asynchronous delivery is a direct cost overlay. Although live classes will continue into the foreseeable future, on-line synchronous delivery could supplant television should the quality of the two methods become comparable.

To deliver high-quality educational material content asynchronously, it is clear that reuse of material, tools to control content production and maintenance costs, and economies of scale will be the key determinants. These issues were beyond the scope of the present project. Again, we would argue that currently no one really knows how to best manage these determinants to hold down costs.

In closing, we note that there are now a great many successful deployments of asynchronous education and training, including entire asynchronous universities. The "technology deficit" which was mentioned repeatedly by students and which we have explored at length as part of this project, will work itself out over time. At this point, the most urgent need for innovation in asynchronous learning lies in the area of pedagogy and in the areas of large-scale content production, electronic organization, and delivery.

At Stanford, it is our intention to continue to offer asynchronous courses in the manner of this project. As was the case during the project, the courses offered will probably range from two to four per quarter (six to twelve per year). At the same time we hope to continue our track-record as innovators by shifting our emphasis toward exploring methods of increasing the quality of asynchronous education while at the same time reducing its cost.
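
The economies-of-scale argument in the Stanford conclusions can be made concrete with back-of-the-envelope arithmetic. The figures below are invented for illustration only and are not from the report: fixed production and maintenance costs are spread over enrollment for an asynchronous course, while live sections scale roughly linearly with enrollment.

    # Hedged illustration with invented numbers (not figures from the report):
    # per-student cost of asynchronous delivery versus live sections.
    import math

    def async_cost_per_student(n_students, production=120_000, maintenance=20_000,
                               support_per_student=150):
        """Fixed production/maintenance spread over enrollment, plus per-student support."""
        return (production + maintenance) / n_students + support_per_student

    def live_cost_per_student(n_students, section_size=30, cost_per_section=9_000):
        """Live sections are added as enrollment grows, so cost scales roughly linearly."""
        sections = math.ceil(n_students / section_size)
        return sections * cost_per_section / n_students

    for n in (50, 200, 1000, 5000):
        print(f"{n:>5} students: async ${async_cost_per_student(n):,.0f} "
              f"vs live ${live_cost_per_student(n):,.0f} per student")

With these invented numbers, asynchronous delivery is far more expensive per student at small enrollments and only breaks even at roughly a thousand students, which is the report's point about reuse, tooling, and scale being the key determinants of cost.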


I notice that David Noble does not devote much attention to successful (and highly profitable) online programs such as Stanford's ADEPT and Duke's online Global Executive MBA programs.  That, plus Noble's bad spelling and sloppy grammar, makes me wonder whether his "research" stands up to rigorous standards of due care and freedom from bias.  He does, however, raise some points worth noting.  Links to his case against distance education are at http://www.trinity.edu/rjensen/000aaa/theworry.htm#DavidNoble 

 

There are other legitimate concerns.  See http://www.trinity.edu/rjensen/000aaa/theworry.htm 

 


The Clipper Project at Lehigh University, aimed at learning assessment, is named after the Pan Am Clipper that "did more than herald a historic shift in the way goods and people were transported. Indeed, it forced new ways of thinking about how we work and live. The expansion of inexpensive air travel brought about a societal transformation."

 

"Sink or Swim?  Higher Education Online:  How Do We Know What works --- And What Doesn't?" by Gregory Farrington and Stephen Bronack, T.H.E. Journal, May 2001, pp. 70-76 --- http://www.thejournal.com/magazine/vault/A3484.cfm 

 

Last spring, the chairman of the House of Representatives science subcommittee on basic research expressed concern about the quality of online college courses. He suggested that students who take courses online may not interact as much as their peers in traditional courses, and that they may walk away with knowledge but not with an understanding of how to think for themselves.

At a hearing designed to gauge how the federal government should respond to this trend, the former president of the University of Michigan, a distinguished MIT professor, and other experts touted several online advantages. Among their assertions were claims that student participation is higher in online courses, and that students have easier access to professors through e-mail.

The committee chairman remained skeptical and said he believed the National Science Foundation should help assess the quality of online education by improving the understanding of how the brain works and by figuring out how humans learn. Well, learning how the brain works is no simple proposition. While we wait for that day to come, there are a lot of insightful educational experiments that can be done to sort out the reality from the sizzle of online education. At Lehigh, we are spending a great deal of time these days doing just that. While arguments can be made both for and against online classes, few are backed by empirical research focused on actual teaching and learning behaviors. We agree strongly with the chairman’s call for high quality educational research.

Millions of dollars are spent each year on the development and delivery of online courses. Much of this funding comes from federal agencies like the Department of Education and the National Science Foundation, and a majority of the supported programs are indeed creating interesting, engaging courses. But how do we know they really work?

At best, one may find anecdotal accounts of successful online classes. Professors claim, “I did it in my class and it worked great!” or “the students noted on the end-of-course survey that they enjoyed the course; therefore it is good.” Occasionally, one may find reports that draw upon commonly shared theories, such as “having control over more of one’s own learning should produce better learners,” as proof of effectiveness. Such insights are valuable, but they don’t provide the kind of understanding needed to make truly informed decisions about the value of online education.

Jim DiPerna (co-director of The Clipper Project) and Rob Volpe conducted a review of research that produced nearly 250 potential articles concerning the evaluation of Web-based instruction over the past 10 years. However, after eliminating duplicate citations and irrelevant articles (i.e., articles merely describing a Web-based course, articles offering guidelines for designing a Web-based course, or articles explaining a particular Web-based technology), only a dozen articles remained. Of the 12, 11 were based solely on students' self-reported attitudes or perceptions regarding Web-based instruction. Amazingly, only one directly assessed the impact of Web-based technology on student learning (as measured by randomly selected essay performance and letter grades) across subjects. DiPerna and Volpe presented a thorough review of their research at APA last August.

As more learning becomes digitized, we must analyze how socialization factors like communication skills and interaction with other students are best fostered. We must know which factors influence success. We must find out how technology affects the way faculty members teach and the way students learn, as well as how much it’s really going to cost to create and deliver this new form of education. The only way we can truly know these things is through observing the behaviors of students participating in digital learning.

At our university, we have just begun a multi-year initiative to investigate the short- and long-term effects of online classes. Aptly titled “The Clipper Project,” the initiative will provide a baseline for future research into the impact of Web-based courses on students and faculty.

For the rest of the article, go to http://www.thejournal.com/magazine/vault/A3484.cfm 

 

The main page of The Clipper Project is at http://clipper.lehigh.edu/ 

 

The Clipper Project is a research and development initiative investigating the costs and benefits of offering Web-based university courses to high school seniors who participate in the project. High school students who are accepted into Lehigh University's early admissions program will be eligible to enroll in a Web-based version of one of Lehigh University's introductory-level courses. Currently, Economics I and Calculus I are available through the Clipper Project. To learn more about each course, visit the Courses section.

Interested in the Clipper Project?

Please visit the sections of the Clipper Project website that interest you. If you have any questions, please view our Frequently Asked Questions (FAQ) sections; links to these sections are to the right. If you don't find what you need, drop us an e-mail, and we'll be happy to answer any questions you may have!


Accreditation Article in my February 9, 2001 Edition of New Bookmarks --- http://www.trinity.edu/rjensen/book01q1.htm#020901 

"Regional Accrediting Commissions:  The Watchdogs of Quality Assurance in Distance Education," by Charles Cook, Syllabus, February 2001, beginning on p. 20 and p. 56.  I think the article will one day be posted at http://www.syllabus.com/ 

"So, what's new?"  It's a question we are often asked as a kind of verbal handshake.  As the executive officer of a regional accrediting commission, these days I respond, "What isn't new?"

My rejoinder is suggestive of how technology-driven change has affected American higher education.  We now have e-learning, largely asynchronous instruction provided anytime/anywhere, expanding its reach.  Faculty roles have become unbundled and instructional programs disaggregated.  The campus portal is no longer made of stone or wrought iron, and through it students have access to virtual textbooks, laboratories, classrooms, and libraries, as well as an array of services, available 24 hours a day, seven days a week; indeed we now have wholly virtual universities.  Technology has made our institutions of higher learning, once like islands, increasingly dependent on external entities if they are to be effective.  Once pridefully autonomous, they now seek affiliations with organizations both within and without the academy to jointly offer programming online.

These new phenomena, unheard of five years ago, challenge the capacity of regional accreditation commissions to provide meaningful quality assurance of instructional programs offered by colleges and universities.  Simply put, many of the structures and conditions that led to accreditation's established assumptions about quality do not hold up in the virtual education environment.  The core challenge, of course, is to deal with new forms of delivery for instruction, resources, and services.  But beyond that, as with so many things, the Net has provided unprecedented opportunities for colleges and students alike to package parts of or all of an educational experience in new ways previously beyond contemplation.  Given these circumstances, it's reasonable to ask, "How is accreditation responding?"

Balancing Accountability and Innovation
The eight regional commissions that provide quality assurance for the majority of degree-granting institutions in the United States are effectively taking action collectively and individually to address the new forms of education delivery.  Working within their existing criteria and processes, they are seeking at once to maintain and apply high standards while also recognizing that education can be provided effectively in a variety of ways.  However, regardless of the form of education delivery in use in higher education, the commissions are resolved to sustain certain values in accrediting colleges and universities:


Revenue and Accreditation Hurdles Facing Corporate Universities

One thing that just does not seem to work is a university commenced by a major publishing house.  McGraw-Hill World University was virtually stillborn as a degree-granting institution.  It evolved into McGraw-Hill Online Learning (http://www.mhonlinelearning.com/), which does offer some interactive training materials, but the original concept of an online university (offering distance education courses for college credit) is dead and buried.  Powerful companies like Microsoft Corporation also started up, and then abandoned, attempts to go it alone in establishing new online universities.

The last venturesome publishing company to start a university and fight to get it accredited is now giving up on the idea of having its own virtual university --- http://www.harcourthighered.com/index.html 
Harcourt Higher Education University was purchased by a huge publishing conglomerate called Thomson Learning (see http://www.thomsonlearning.com/harcourt/).  Thomson had high hopes, but soon faced the reality that it is probably impossible to compete with established universities in training and education markets.

The Thomson Corporation has announced that it will not continue to operate Harcourt Higher Education: An Online College as an independent degree-granting institution. Harcourt Higher Education will close on August 27, 2001. The closing is the result of a change of ownership, which occurred on July 13, 2001, when the Thomson Corporation purchased the online college from Harcourt General, Inc.

From Syllabus e-News on August 7, 2001

Online College to Close Doors

Harcourt Higher Education, which launched an online for-profit college in Massachusetts last year, is closing the school's virtual doors Sept. 28. Remaining students will have their credentials reviewed by the U.S. Open University, the American affiliate of the Open University in England.

We can only speculate as to the complex reasons why publishing companies start up degree-granting virtual universities and subsequently abandon efforts to provide credit courses and degrees online.  

Enormous Revenue Shortfall (Forecast: 20,000 students within five years; Reality: 20 to 30 students)

"E-COLLEGES FLUNK OUT," By: Elisabeth Goodridge, Information Week, August 6, 2001, Page 10 

College students appear to prefer classroom instruction over online offerings.

Print and online media company Thomson Corp. said last week it plans to close its recently acquired, for-profit online university, Harcourt Higher Education.  Harcourt opened with much fanfare a year ago, projecting 20,000 enrollees within five years, but only 20 to 30 students have been attending.

Facing problems from accreditation to funding, online universities have been struggling mightily--in stark contrast to the success of the overall E-learning market.  A possible solution?  E-learning expert Elliott Masie predicts "more and more creative partnerships between traditional universities and online ones."

Roosters Guarding the Hen House
Publishing houses failed to gain accreditation.  I suspect that a major reason is that the AACSB and other accrediting bodies have made it virtually impossible for corporations to obtain accreditation for startup learning corporations that are not partnered with established colleges and universities.  In the U.S., a handful of corporations have received regional accreditation (e.g., The University of Phoenix and Jones International University), but these were established and had a history of granting degrees prior to seeking accreditation.  In business higher education, corporations face a nearly impossible hurdle in achieving business school accreditation (see http://businessmajors.about.com/library/weekly/aa050499.htm), since the respected accrediting bodies are controlled by existing educational institutions (usually established business school deans who behave like roosters guarding the hen house).  Special accrediting bodies for online programs have sprung up, but these have not achieved sufficient prestige vis-à-vis established accrediting bodies.  

Note the links to accreditation issues at http://www.degree.net/guides/accreditation.html
(where GAAP means Generally Accepted Accrediting Practices).

All About Accreditation: A brief overview of what you really need to know about accreditation, including GAAP (Generally Accepted Accrediting Practices). Yes, there really are fake accrediting agencies, and yes some disreputable schools do lie. This simple set of rules tells how to sort out truth from fiction. (The acronym is, of course, borrowed from the field of accounting. GAAP standards are the highest to which accountants can be held, and we feel that accreditation should be viewed as equally serious.)

GAAP-Approved Accrediting Agencies: A listing of all recognized accrediting agencies, national, regional, and professional, with links that will allow you to check out schools.

Agencies Not Recognized Under GAAP: A list of agencies that have been claimed as accreditors by a number of schools, some totally phony, some well-intentioned but not recognized.

FAQs: Some simple questions and answers about accreditation and, especially, unaccredited schools.

For more details on accreditation and assessment, see http://www.trinity.edu/rjensen/assess.htm

Question:
Is lack of accreditation the main reason why corporate universities such as McGraw-Hill World University, Harcourt Higher Education University, Microsoft University, and other corporations have failed in their attempts to compete with established universities? 

Bob Jensen's Answer:
Although the lack of minimum accreditation (necessary for transferring credits to other colleges) is a very important cause of failure in the first few years of attempting to attract online students, it is not the main cause of failure.  Many (most) of the courses available online were training courses for which college credit transfer is not an issue.

  1. Why did the University of Wisconsin (U of W) swell with over 100,000 registered online students while Harcourt Higher Education University (HHWU) struggled to get 20 registered?

    Let me begin to answer my own question with two questions.  If you wanted to take an online training or education course from your house in Appleton, Wisconsin, would you prefer to pay much more for the course from HHWU than the low in-state tuition at the U of W?  If you were a resident of Algona, Iowa, and the price were the same whether you registered at HHWU or the U of W, would you choose the U of W?  My guess is that in both cases students would choose the U of W, because the University of Wisconsin has a long tradition of quality that is more readily recognized on a student's transcript.

  2. Why can the University of Wisconsin offer a much larger curriculum than corporate universities?

    The University of Wisconsin had a huge infrastructure for distance education long before the age of the Internet.  Televised distance education across the state has been in place for over 30 years.  Extension courses have been given around the entire State of Wisconsin for many decades.  The University of Wisconsin's information technology system is already in place at a cost of millions upon millions of dollars.  There are tremendous economies of scale for the University of Wisconsin in offering a huge online curriculum for training and education vis-à-vis a startup corporate university starting virtually from scratch.

  3. What target market feels more closely attached to the University of Wisconsin than some startup corporate university?

    The answer is obvious.  It's the enormous market comprised of alumni and families of alumni from every college and university in the University of Wisconsin system of state-supported schools.

  4. What if a famous business firm such as Microsoft Corporation or Accenture (formerly Andersen Consulting) elected to offer a prestigious combination of executive training and education to only upper-level management in major international corporations?  What are the problems in targeting to business executives?

    This target market is already carved out by alumni of elite schools such as Stanford, Harvard, Chicago, Carnegie-Mellon, Columbia, London School of Economics, Duke, University of Michigan, University of Texas, and the other universities repeatedly ranked among the top 50 business schools in the nation.  Business executives are more often than not snobs when it comes to universities in the peer set of "their" alma maters.  Logos of top universities are worth billions in the rising executive onsite and online training and education market.  UNext Corporation recognized this, which is why its first major step in developing an online executive education program was to partner with five of the leading business schools in the world.


  5. Why does one corporate university, The University of Phoenix, prosper when others fail or limp along with costs exceeding revenues?  

    The University of Phoenix is the world's largest private university.  Its success is largely due to a tradition of quality dating back to 1976.  This does not mean that quality has always been high for every course over decades of operation, but each year this school seems to grow and offer better and better courses.  Since most of its revenues still come from onsite courses, it is not clear that the school would prosper if it became solely an online university.  The school is probably further along the learning curve than most other schools in terms of adult learners.  It has a large number of very dedicated and experienced full-time and part-time faculty.  It understands the importance of small classes and close communication between students and other students and instructors.  It seems to fill a niche that traditional colleges and universities have overlooked.

You can read more about these happenings at http://www.trinity.edu/rjensen/000aaa/0000start.htm 
Especially note the prestigious universities going online at http://www.trinity.edu/rjensen/crossborder.htm 

 



EVALUATION OF LEARNING TECHNOLOGY --- http://ifets.ieee.org/periodical/vol_4_2000/v_4_2000.html  

"An introduction to the Evaluation of Learning Technology" 
http://ifets.ieee.org/periodical/vol_4_2000/intro.html
 
Martin Oliver
Higher Education Research and Development Unit University College London, 1-19 Torrington Place London, WC1E 6BT, England Tel: +44 20 7679 1905 martin.oliver@ucl.ac.uk 

Evaluation can be characterised as the process by which people make judgements about value and worth; however, in the context of learning technology, this judgement process is complex and often controversial. This article provides a context for analysing these complexities by summarising important debates from the wider evaluation community. These are then related to the context of learning technology, resulting in the identification of a range of specific issues. These include the paradigm debate, the move from expert-based to practitioner-based evaluation, attempts to provide tools to support practitioner-led evaluation, authenticity, the problem of defining and measuring costs, the role of checklists, the influence of the quality agenda on evaluation and the way in which the process of evaluation is itself affected by the use of learning technology. Finally, these issues are drawn together in order to produce an agenda for further research in this area.

"Mapping the Territory: issues in evaluating large-scale learning technology initiatives" 
 http://ifets.ieee.org/periodical/vol_4_2000/anderson.html 
Charles Anderson, Kate Day, Jeff Haywood, Ray Land and Hamish Macleod
Department of Higher and Further Education University of Edinburgh, 
Paterson's Land Holyrood Road, Edinburgh EH8 8AQ

This article details the challenges that the authors faced in designing and carrying out two recent large-scale evaluations of programmes designed to foster the use of ICT in UK higher education. Key concerns that have been identified within the evaluation literature are considered and an account is given of how these concerns were addressed within the two studies. A detailed examination is provided of the general evaluative strategies of employing a multi-disciplinary team and a multi-method research design and of how the research team went about: tapping into a range of sources of information, gaining different perspectives on innovation, tailoring enquiry to match vantage points, securing representative ranges of opinion, coping with changes over time, setting developments in context and dealing with audience requirements. Strengths and limitations of the general approach and the particular tactics that were used to meet the specific challenges posed within these two evaluation projects are identified.

"Peering Through a Glass Darkly: Integrative evaluation of an on-line course"
 http://ifets.ieee.org/periodical/vol_4_2000/taylor.html 
Josie Taylor (There are also other authors listed for this article)
Senior Lecturer, Institute of Educational Technology
The Open University, Walton Hall
Milton Keynes MK7 6AA United Kingdom
j.taylor@open.ac.uk
 
Tel: +44 1908 655965

In this study we describe a wide-spectrum approach to the integrative evaluation of an innovative introductory course in computing. Since both the syllabus, designed in consultation with industry, and the method of presentation of study materials are new, the course requires close scrutiny. It is presented in the distance mode to a class of around 5,000 students and uses a full range of media: paper, broadcast television, interactive CD-ROM, a Web-oriented programming environment, a Web site and computer conferencing. The evaluation began with developmental testing whilst the course was in production, and then used web-based and paper-based questionnaires once the course was running. Other sources of data, in the form of observation of computing conferences and an instrumented version of the Smalltalk programming environment, also provide insight into students’ views and behaviour. This paper discusses the ways in which the evaluation study was conducted and lessons we learnt in the process of integrating all the information at our disposal to satisfy a number of stakeholders.

"An evaluation model for supporting higher education lecturers in the integration of new learning technologies" 
http://ifets.ieee.org/periodical/vol_4_2000/joyes.html 
Gordon Joyes 
Teaching Enhancement Advisor and Lecturer in Education School of Education
University of Nottingham Jubilee Campus, Wollaton Road Nottingham, NG8 1BB United Kingdom Gordon.Joyes@nottingham.ac.uk  Tel: +44 115 9664172 Fax: +44 115 9791506

This paper provides a description and some reflections on the ongoing development and use of an evaluation model. This model was designed to support the integration of new learning technologies into courses in higher education. The work was part of the Higher Education Funding Council for England (HEFCE) funded Teaching and Learning Technology Programme (TLTP). The context and the rationale for the development of the evaluation model is described with reference to a case study of the evaluation of the use of new learning technologies in the civil and structural engineering department in one UK university. Evidence of the success of the approach to evaluation is presented and the learning media grid that arose from the evaluation is discussed. A description of the future use of this tool within a participatory approach to developing learning and teaching materials that seeks to embed new learning technologies is presented.

"A multi-institutional evaluation of Intelligent Tutoring Tools in Numeric Disciplines" 
http://ifets.ieee.org/periodical/vol_4_2000/kinshuk.html
  
Kinshuk (there are other authors listed for this article)
Information Systems Department 
Massey University, Private Bag 11-222 Palmerston North, New Zealand Tel: +64 6 350 5799 Ext 2090 Fax: +64 6 350 5725 kinshuk@massey.ac.nz 

This paper presents a case study of evaluating intelligent tutoring modules for procedural knowledge acquisition in numeric disciplines. As Iqbal et al. (1999) have noted, the benefit of carrying out evaluation of Intelligent Tutoring Systems (ITS) is to focus the attention away from short-term delivery and open up a dialogue about issues of appropriateness, usability and quality in system design. The paper also mentions an independent evaluation and how its findings emphasise the need to capture longer-term retention.

"Avoiding holes in holistic evaluation" 
http://ifets.ieee.org/periodical/vol_4_2000/shaw.html
 
Malcolm Shaw 
Academic Development Manager The Academic Registry, Room F101 Leeds Metropolitan University Calverley Street, Leeds, LS1 3HE, UK m.shaw@lmu.ac.uk Tel: +44 113 283 3444 Fax: +44 113 283 3128

Suzanne Corazzi 
Course Leader, Cert. in English with Prof. Studies Centre for Language Studies, 
Jean Monnet Building Room G01 Leeds Metropolitan University Beckett Park, Leeds, LS6 3QS, UK s.corazzi@lmu.ac.uk  Tel: +44 113 283 7440 Fax: +44 113 274 5966

The paper describes the evaluation strategy adopted for a major Teaching and Learning Technology Programme (TLTP3) funded project involving Leeds Metropolitan University (LMU), Sheffield Hallam University (SHU) and Plymouth University. The project concerned the technology transfer of a web-based learning resource that supports the acquisition of Key Skills from one of the Universities (LMU) to the others, and its customisation for these new learning environments.

The principles that guided the development of the evaluation strategy are outlined and the details of the methods employed are given. The practical ways in which this large project approached the organisation and management of the complexities of the evaluation are discussed. Where appropriate, examples of the sort of procedures and tools used are also provided.

Our overarching aim in regard to evaluation was to take a thorough and coherent approach that was holistic and that fully explored all the main aspects in the project outcomes. The paper identifies the major issues and problems that we encountered and the conclusions that we have reached about the value of our approach in a way that suggests its potential usefulness to others operating in similar circumstances.

"Classroom Conundrums: The Use of a Participant Design Methodology" 
http://ifets.ieee.org/periodical/vol_4_2000/cooper.html
 
Bridget Cooper and Paul Brna 
Computer Based Learning Unit, Leeds University Leeds LS2 9JT, England, UK Tel: +44 113 233 4637 Fax: +44 113 233 4635 bridget@cbl.leeds.ac.uk paul@cbl.leeds.ac.uk 

We discuss the use of a participant design methodology in evaluating classroom activities in the context of an ongoing European-funded project, NIMIS (Networked Interactive Media in Schools). We describe the thinking behind the project and choice of methodology, including a description of the pedagogical claims method utilised, the way in which it was carried out, and some of the interim results and issues raised in the process.

Though the project is situated in three European schools, we concentrate here on the evaluation in one UK school in particular: Glusburn County Primary school, near Leeds. The classroom has been very well received by teachers and pupils and the preliminary evaluation suggests some beneficial effects for both teachers and pupils, as well as long term consequences from the participant design methodology for some of the participants.

"Evaluating information and communication technologies for learning" 
http://ifets.ieee.org/periodical/vol_4_2000/scanlon.html
 
 Eileen Scanlon, Ann Jones, Jane Barnard, Julie Thompson and Judith Calder 
Institute for Educational Technology The Open University, Milton Keynes MK7 6AA United Kingdom e.scanlon@open.ac.uk  Tel: +44 1908 274066

In this paper we will describe an approach to evaluating learning technology which we have developed over the last twenty-five years, outline its theoretical background and compare it with other evaluation frameworks. This has given us a set of working principles from evaluations we have conducted at the Open University and from the literature, which we apply to the conduct of evaluations. These working practices are summarised in the context interactions and outcomes (CIAO!) model. We describe here how we applied these principles, working practices and models to an evaluation project conducted in Further Education. We conclude by discussing the implications of these experiences for the future conduct of evaluations.

"A Large-scale ‘local’ evaluation of students’ learning experiences using virtual learning environments" 
http://ifets.ieee.org/periodical/vol_4_2000/richardson.html
  
Julie Ann Richardson 
3rd Floor, Weston Education Centre Guys, King’s & St. Thomas’ Hospital Cutcombe Rd., London, SE5 9RJ United Kingdom julie.richardson@kcl.ac.uk  Tel: +44 207 848 5718 Fax: +44 207 848 5686

Anthony Turner 
Canterbury Christ Church University 
College North Holmes Rd., Canterbury, CT1 1QU United Kingdom a.e.turner@cant.ac.uk  Tel: +44 1227 782880

In 1997-8 Staffordshire University introduced two Virtual Learning Environments (VLEs), Lotus Learning Space, and COSE (Creation of Study Environments), as part of its commitment to distributed learning. A wide-reaching evaluation model has been designed, aimed at appraising the quality of students’ learning experiences using these VLEs. The evaluation can be considered to be a hybrid system with formative, summative and illuminative elements. The backbone of the model is a number of measuring instruments that were fitted around the educational process beginning in Jan 1999.

This paper provides an overview of the model and its implementation. First, the model and evaluation instruments are described. Second, the method and key findings are discussed. These highlighted that students need to feel more supported in their learning, that they need more cognitive challenges to encourage higher-order thinking and that they prefer to download their materials to hard copy. In addition, tutors need to have a greater awareness of the ways individual differences influence the learning experience and of strategies to facilitate electronic discussions. Generally, there should be a balance between learning on-line and face-to-face learning depending on the experience of tutors, students, and the subject.

Finally the model is evaluated in light of the processes and findings from the study.

"Towards a New Cost-Aware Evaluation Framework" 
http://ifets.ieee.org/periodical/vol_4_2000/ash.html
 
Charlotte Ash 
School of Computing and Management Sciences Sheffield Hallam University Stoddart Building, Howard Street Sheffield, S1 1WB, United Kingdom Tel: +44 114 225 4969 Fax: +44 114 225 5178 c.e.ash@shu.ac.uk 

This paper proposes a new approach to evaluating the cost-effectiveness of learning technologies within UK higher education. It identifies why we, as a sector, are so unwilling to base our decisions on results of other studies and how these problems can be overcome using a rigorous, quality-assured framework which encompasses a number of evaluation strategies. This paper also proposes a system of cost-aware university operation, including integrated evaluation, attainable through the introduction of Activity-Based Costing. It concludes that an appropriate measure of cost-effectiveness is essential as the sector increasingly adopts learning technologies.

"W3LS: Evaluation framework for World Wide Web learning" 
http://ifets.ieee.org/periodical/vol_4_2000/veen.html
 
Jan van der Veen  (There are other authors of this article)
DINKEL Educational Centre University of Twente p.o.box 217, 7500AE Enschede The Netherlands Tel: +31 53 4893273 Fax: +31 53 4893183 j.t.vanderveen@dinkel.utwente.nl 

An evaluation framework for World Wide Web learning environments has been developed. The W3LS (WWW Learning Support) evaluation framework presented in this article is meant to support the evaluation of the actual use of Web learning environments. It indicates how the evaluation can be set up using questionnaires and interviews among other methods. The major evaluation aspects and relevant 'stakeholders' are identified. First results of cases using the W3LS evaluation framework are reported from different Higher Education institutes in the Netherlands. The usability of the framework is evaluated, and future developments in the evaluation of Web learning in Higher Education in the Netherlands are discussed.

Once again, the main website is at http://ifets.ieee.org/periodical/vol_4_2000/v_4_2000.html 


E-Learner Competencies, by P. Daniel Birch, Learning Circuits --- http://www.learningcircuits.org/2002/jul2002/birch.html 

Training managers and online courseware designers agree that e-learning isn't appropriate for every topic. But e-learning also may not be the right fit for all types of learners. Here are some of the behaviors of a successful e-learner. Do you have them?

Much has been said about the impact e-learning has on content developers, trainers, and training managers. When the conversation turns to learners, attention focuses on the benefits of less travel and fewer hours spent away from jobs. However, those issues don't create an entire picture of how e-learning affects participants.

The industry needs to take a closer look at how learning behaviors might adapt in an online environment. In other words, how do the skills that serve learners well in a classroom or during on-the-job learning translate to self-paced and virtual collaboration learning experiences? Do learners need new competencies? Will an organization find that some of its employees have e-learning disabilities?

In general, three major factors influence an e-learner's success:

Continued at  http://www.learningcircuits.org/2002/jul2002/birch.html 

Links
How to be an E-Learner
Something to Talk About: Tips for Communicating in an Electronic Environment

Bob Jensen's Threads on assessment --- 

 

 

 


The Criterion Problem

A message from Professor XXXXX

I recently submitted an article on Assessment Outcomes for distance education (DE) to "The Technology Source". The editor suggested that I include a reference to profiling the successful DE student because he was sure some research existed on the subject. Well I have been looking for it casually for 3 years in my reading and the 3-4 conferences per year that I attend, and never have come across anything. Have spent the last week looking in InfoTrac and reviewed close to 300 abstracts, without a single good lead. You are the man. So hoping you can answer the question - is there any empirical research on the question of profiling a successful DE student and in particular any research where an institution actually has a hurdle for students to get into DE based on a pedagogically sound questionnaire? Hoping you know the answer and have time to respond.

Reply from Bob Jensen

Hi XXXXX,

I am reminded of a psychology professor, Tom Harrell, whom I had years ago at Stanford University.  He had a long-term contract from the U.S. Navy to study Stanford students when they entered the MBA program and then follow them through their careers.  The overall purpose was to define predictors of success that could be used for admission to the Stanford GSB (and extended to tests for admission into careers, etc.).  Dr. Harrell's research became hung up on "The Criterion Problem" (i.e., the problem of defining and measuring "success").  You will have the same trouble whenever you try to assess graduates of any education program, whether it is onsite or online.  What is success?  What is the role of any predictor apart from a myriad of confounded variables?

You might take a look at the following reference:
Harrell, T.W. (1992). "Some history of the army general classifications test," Journal of Applied Psychology, 77, 875-878.

Success is a relative term.  Grades are not always good criteria for assessment.  Perhaps a C student is the greatest success story of a distance education program.  Success may lie in motivating a weak student to keep trying for the rest of his or her life to learn as much as possible.  Success may lie in motivating a genius to channel creativity.  Success may lie in scores on a qualification examination such as the CPA examination.  However, use of "scores" is very misleading, because the impact of a course or an entire college degree is confounded by other predictors such as age, intellectual ability, motivation, freedom to prepare for the examination, etc.  

Success may lie in advancement in the workforce, but promotion and opportunity are subject to widely varying and often-changing barriers and opportunities.  A program's best graduate may end up on a dead end track, and its worst graduate may be a maggot who fell in a manure pile.  For example, it used to be virtually impossible for a woman to become a partner in a large public accounting firm.  Now the way is paved with all sorts of incentives for women to hang in there and attain partnership. Success also entails being at the right place at the right time, and this is often a matter of luck as well as ability.  George Bush probably would never have had an opportunity to become one of this nation's best leaders if there had not been a terrorist attack that afforded him such an opportunity.  Certainly this should not be termed "lucky," but it is a rare "opportunity" to be a great "success."

When it comes to special criteria for acceptance into distance education programs, there are some who feel that, in fairness, there should be no special criteria beyond the criteria for acceptance into traditional programs.  For example, see the Charles Sturt University document at  http://www.csu.edu.au/acadman/d13m.htm 

You might find some helpful information in the following reference --- http://202.167.121.158/ebooks/distedir/bestkudo.htm 

Phillips, V., & Yager, C. The best distance learning graduate schools: Earning your degree without leaving home.
This book profiles 195 accredited institutions that offer graduate degrees via distance learning. Topics include: graduate study, the quality and benefits of distance education, admission procedures and criteria, available education delivery systems, as well as accreditation, financial aid, and school policies.

A review is given at http://distancelearn.about.com/library/weekly/aa022299.htm 

More directly related to your question, might be the self assessment suggestions at Excelsior College:

Self Assessment -- http://gl.excelsior.edu/epn2/ec_open.nsf/pages/assess.htm 

Another self assessment process is provided by ISIM University at http://www.isimu.edu/foryou/begin/eprocess.htm 

In self-assessment processes it is sometimes difficult to determine whether the motivation is promotion of the program or genuine assessment aimed at having students self-select whether or not to apply.

You might be able to contact California State University at Fullerton to see if they will share some of their assessment outcomes of online learning courses. A questionnaire that is used there is at http://de-online.fullerton.edu/de/assessment/assessment.asp 

Some good assessment advice is given at http://www.ala.org/acrl/paperhtm/d30.html 

A rather neat PowerPoint show from Brazil is provided at http://www.terena.nl/tnc2000/proceedings/1B/1b2.ppt  
(Click on the slides to move forward.)

The following references are given at 

  1. Faculty Course Evaluation Form
    University of Bridgeport
  2. Web-Based Course Evaluation Form
    Nashville State Technology Institute
  3. Guide to Evaluation for Distance Educators
    University of Idaho Engineering Outreach Program
  4. Evaluation in Distance Learning: Course Evaluation
    World Bank Global Distance EducationNet

A Code of Assessment Practice is given at http://cwis.livjm.ac.uk/umf/vol5/ch1.htm 

A comprehensive outcomes assessment report (for the University of Colorado) is given at http://www.colorado.edu/pba/outcomes/ 

A Distance Learning Bibliography is available at http://mason.gmu.edu/~montecin/disedbiblio.htm 

Also see "Integration of Information Resources into Distance Learning Programs"  by Sharon M. Edge and Denzil Edge at http://www.learninghouse.com/pubs_pubs02.htm 

My bottom line conclusion is that I probably did not provide the specific help you requested.  At best, I provided you with some food for thought.


 

Onsite Versus Online

I think the following applies to education as well as any business corporation.  The problem is that universities are notoriously slow to change relative to such organizations as business firms and the military.

New Technology for Proctoring Distance Education Examinations

"Proctor 2.0," by Elia Powers, Inside Higher Ed, June 2, 2006 --- http://www.insidehighered.com/news/2006/06/02/proctor

It’s time for final exams. You’re a student in Tokyo and your professor works in Alabama. It’s after midnight and you’re ready to take the test from your bedroom. No problem. Flip open your laptop, plug in special hardware, take a fingerprint, answer the questions and you’re good to go.

Just know this: Your professor can watch your every move ... and see the pile of laundry building up in the corner of the room.

Distance learning programs – no matter their structure or locations – have always wrestled with the issue of student authentication. How do you verify that the person who signed up for a class is the one taking the test if that student is hundreds, often thousands, of miles away?

Human oversight, in the form of proctors who administer exams from a variety of places, has long been the solution. But for some of the larger distance education programs — such as Troy University, with about 17,000 eCampus students in 13 time zones — finding willing proctors and centralized testing locations has become cumbersome.

New hardware being developed for Troy would allow faculty members to monitor online test takers and give students the freedom to take the exam anywhere and at any time. In principle, it is intended to defend against cheating. But some say the technology is going overboard.

Sallie Johnson, director of instructional design and education technologies for Troy’s eCampus, approached Cambridge, Mass.-based Software Secure Inc. less than two years ago to develop a unit that would eliminate the need for a human proctor. Johnson said the hardware is the university’s response to the urgings of both Congress and regional accrediting boards to make authentication a priority.

The product, called Securexam Remote Proctor, would likely cost students about $200. The unit hooks into a USB port and does not contain the student’s personal information, allowing people to share the product. The authentication is done through a server, so once a student is in the database, he or she can take an exam from any computer that is hardware compatible.

A fingerprint sensor is built into the base of the remote proctor, and professors can choose when and how often they want students to identify themselves during the test, Johnson said. In the prototype, a small camera with 360-degree-view capabilities is attached to the base of the unit. Real-time audio and video is taken from the test taker’s room, and any unusual activity — another person walking into the room, an unfamiliar voice speaking — leads to a red-flag message that something might be awry.

Professors need not watch students taking the test live; they can view the streaming audio or video at any time.

“We can see them and hear them, periodically do a thumb print and have voice verification,” Johnson said. “This allows faculty members to have total control over their exams.”

Douglas Winneg, president of Software Secure, said the new hardware is the first the company has developed with the distance learning market in mind. It has developed software tools that filter material so that students taking tests can’t access any unauthorized material.

Winneg, whose company works with a range of colleges, said authentication is “a painful issue for institutions, both traditional brick-and-mortar schools and distance learning programs.”

Troy is conducting beta tests of the product at its home campus. Johnson said by next spring, the Securexam Remote Proctor could commonly be used in distance learning classes at the university, with the eventual expectation that it will be mandatory for students enrolled in eCampus classes.

Bob Jensen's threads on emerging tools of our trade --- http://www.trinity.edu/rjensen/000aaa/thetools.htm

 


From Syllabus News on December 9, 2003

MIT Sloan Professor: Use Tech to Reinvent Business Processes

Many private companies are using technology to keep down their labor costs, but the key to sustained growth and revived employment lies in whether they will successfully use technology to redesign the basic way they operate, says MIT Sloan Prof. Erik Brynjolfsson, director of the Center for eBusiness at MIT Sloan.

In his research, Brynjolfsson found widely different outcomes among companies that spent similar amounts on technology, the difference being in what managers did once the new tech was in place. "Some companies only go part way," said Brynjolfsson, an expert on information technologies and productivity. "They use technology to automate this function or to eliminate that job. But the most productive and highly valued companies do more than just take the hardware out of the box. They use IT to reinvent their business processes from top to bottom. Managers who sit back and assume that gains will come from technology alone are setting themselves up for failure."

Bob Jensen's related threads are at the following URLs:

Management and costs --- http://www.trinity.edu/rjensen/distcost.htm 


May 5, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu]

NEW E-JOURNAL ON LEARNING AND EVALUATION

STUDIES IN LEARNING, EVALUATION, INNOVATION AND DEVELOPMENT is a new peer-reviewed electronic journal that "supports emerging scholars and the development of evidence-based practice and that publishes research and scholarship about teaching and learning in formal, semi-formal and informal educational settings and sites." Papers in the current issue include:

"Can Students Improve Performance by Clicking More? Engaging Students Through Online Delivery" by Jenny Kofoed

"Managing Learner Interactivity: A Precursor to Knowledge Exchange" by Ken Purnell, Jim Callan, Greg Whymark and Anna Gralton

"Online Learning Predicates Teamwork: Collaboration Underscores Student Engagement" by Greg Whymark, Jim Callan and Ken Purnell

Studies in Learning, Evaluation, Innovation and Development [ISSN 1832-2050] will be published at least once a year by the LEID (Learning, Evaluation, Innovation and Development) Centre, Division of Teaching and Learning Services, Central Queensland University, Rockhampton, Queensland 4702 Australia. For more information contact: Patrick Danaher, tel: +61-7-49306417; email: p.danaher@cqu.edu.au. Current and back issues are available at http://www.sleid.cqu.edu.au/index.php .


Important Distance Education Site
The Sloan Consortium --- http://www.aln.org/
The purpose of the Sloan Consortium (Sloan-C) is to help learning organizations continually improve quality, scale, and breadth according to their own distinctive missions, so that education will become a part of everyday life, accessible and affordable for anyone, anywhere, at any time, in a wide variety of disciplines.


Salem-Keizer Online, or S.K.O., is one in a growing number of public, private and charter schools available to kids who are looking for an alternative to a traditional education. Commonly called ''virtual school,'' it's a way of attending school at home without the hovering claustrophobia of home-schooling.

"School Away From School," by Emily White, The New York Times, December 7, 2003 ---  http://www.nytimes.com/2003/12/07/magazine/07CYBER.html 

Virtual school seems like an ideal choice for kids who don't fit in or can't cope. ''I'm a nervous, strung-out sort of person,'' says Erin Bryan, who attends the online Oregon-based CoolSchool. Erin used to attend public school in Hood River, Ore., but ''I didn't like the environment,'' she says. ''I am afraid of public speaking, and I would get really freaked out in the mornings.''

Kyle Drew, 16, a junior at S.K.O., says: ''I couldn't get it together. I was skipping more and more classes, until I was afraid to go to school.'' Leavitt Wells, 13, from Las Vegas, was an ostracized girl with revenge on her mind. ''The other kids didn't want anything to do with me,'' she says. ''I'd put exploded gel pens in their drawers.'' Now she attends the Las Vegas Odyssey Charter School online during the day, and when her adrenaline starts pumping, she charges out into the backyard and jumps on the trampoline.

On S.K.O.'s Web site, students can enter a classroom without being noticed by their classmates by clicking the ''make yourself invisible'' icon -- a good description of what these kids are actually doing. Before the Internet, they would have had little choice but to muddle through. Now they have disappeared from the school building altogether, a new breed of outsider, loners for the wired age.

Douglas Koch is only 12, but he is already a high-school sophomore. He says that he hopes to graduate by the time he's 15. Today he sits at his computer in his Phoenix living room -- high ceilings and white walls, a sudden hard rain stirring up a desire to look out the shuttered windows. Douglas's 10-year-old brother, Gregory, is stationed across the room from him -- he is also a grade-jumper. The Koch brothers have been students at the private Christa McAuliffe Academy, an online school, for more than a year now. While S.K.O. is a public school, C.M.A. is private, charging $250 a month and reaching kids from all over the country. From Yakima, Wash., it serves 325 students, most of whom attend classes year-round, and employs 27 teachers and other staff members.

The first section of this article is not quoted here.


For those of you who think distance education is going downhill, think again.  The number of students switching from traditional brick-and-mortar classrooms to full-time virtual schools in Colorado has soared over the past five years…

"Online Ed Puts Schools in a Bind:  Districts Lose Students, Funding," by Karen Rouse, Denver Post, December 2, 2004 --- http://www.denverpost.com/Stories/0,1413,36%257E53%257E2522702,00.html 

The number of students switching from traditional brick-and-mortar classrooms to full-time virtual schools in Colorado has soared over the past five years.

During the 2000-01 school year, the state spent $1.08 million to educate 166 full-time cyberschool students, according to the Colorado Department of Education. This year, the state projects spending $23.9 million to educate 4,237 students in kindergarten through 12th grade, state figures show.

And those figures - which do not include students who are taking one or two online courses to supplement their classroom education - are making officials in the state's smallest districts jittery.

Students who leave physical public schools for online schools take their share of state funding with them.

"If I lose two kids, that's $20,000 walking out the door," said Dave Grosche, superintendent of the Edison 54JT School District.

Continued in the article

December 3, 2004 Reply from Steve Doster [sdoster@SHAWNEE.EDU

Are there any internal controls that would discourage an unethical distance learning student from simply hiring another to complete his distance learning assignments and essentially buying his grade?

Steve

December 3, 2004 reply from Amy Dunbar [Amy.Dunbar@BUSINESS.UCONN.EDU

In the graduate accounting distance learning classes at UConn, the students work in groups in chat rooms. Students are graded on participation in these groups (by the other students in my classes). They meet each other in a one-week in-residence session at the beginning of the MSA program. If a student hired another student in his/her place, that impersonator would have to follow through on group work, which isn’t likely. I taught 76 students this past summer, and perhaps I am naïve, but I would be surprised if I had any impersonators. Working with the students through instant messenger and in chat rooms really creates strong relationships, and I think I could detect impersonators quickly. In fact, a sibling of a student logged on using his brother’s AIM login, and after two sentences, I asked who was on the other end. The brother admitted who he was. It’s harder to fake than you might think. All that said, I really am not all that concerned about online cheating. These courses are expensive, and if a student really wants to cheat, s/he can do it, whether the course is FTF or distance. I do not see myself as a monitor for graduate students. My attitude would be much different for undergrads, but I think that grads are far more goal oriented, and cheating is less of a concern.

December 3, 2004 reply from Bruce Lubich [blubich@UMUC.EDU

I would echo what Amy has said. At University of Maryland University College, our online courses are taught in asynchronous mode. It doesn't take long to learn the students' communication styles. When that changes, it stands out like a sore thumb. Of course, there are times when a student will submit someone else's work. I've had other students turn those students in. Whether I catch them or a student turns them in, it's handled very quickly and strictly. Students know the consequences of cheating are very harsh. Having said all that, the other element is the students themselves. We deal with adult graduate students who have work experience and goals in mind. They are smart enough to know that they only cheat themselves out of learning and reaching their objectives when they cheat. Does that sound ideal and naive? Maybe. But I've had many students say that to me. Mature students are not stupid.

I would also point out that when comparing 20 years of teaching in f2f classrooms, I have not experienced an increase in cheating. Let's face it. Students who want to cheat will find a way. Does it really matter whether they're online if all they have to do is use their camera phone to send a picture of the test answers to someone on the other side of the room?

I understand the skepticism and concern about cheating in the online environment. But as more and more of you move into that environment, you'll discover that the concern is no more than what exists in the f2f environment.

December 3, 2004 reply from David Fordham, James Madison University [fordhadr@JMU.EDU

Steve,

Depends on how you define distance education.

At JMU's on-line MBA infosec program, we require an in- person meeting at the beginning and again at the end of each course. Everyone has to fly into Washington Dulles and meet at the Sheraton in Herndon every 8 weeks during the 2-year program. Friday afternoon and Saturday is the wrap-up of the previous course, and Saturday evening and Sunday is the start of the new course.

In between the in-person meetings, students meet weekly or twice-weekly on-line (synchronous) using Centra Symposium, supplemented by Blackboard-based resources, plus Tegrity recorded lectures and presentations.

During the very first in-person meeting, we take pictures of every student, mainly to help the professors put a face with the name before the courses begin. During the Saturday-afternoon-Sunday meeting at the start of a course, the instructor gets to know the students personally, putting faces with names and voices. Then, for the following eight weeks while on-line, the professor has a pretty good handle on who he's interacting with.

I believe it would be fairly easy for me to spot a phony on-line, not only by voice, but also by attitudes, approaches, beliefs, experiences, and backgrounds. Our program is very interactive in real time and requires significant group work and other inter-personal activities.

Then, at the end of the eight weeks, the students get back together for a Friday-afternoon-Saturday-morning session with the professor for the final examination, case presentations, etc. Again, I would be able to easily recognize someone outside the class based on my 45 hours of interaction with them over the previous 8 weeks. It would be obvious if a student's level of knowledge and understanding, energy, motivation, attitudes, opinions, reasoning, logic, etc. were atypical of that student's experience with me in class.

So in our case, the in-person meeting requirement every 8 weeks serves, we believe, as sufficient internal control to prevent the substitution from going undetected.

I'm interested in other experiences and opinions.

David Fordham 
James Madison University

December 3, 2004 reply from Barbara Scofield [scofield@GSM.UDALLAS.EDU

As a member of the UT System MBA Online Academic Affairs Committee from 1998-2004, I watched new online faculty and instructors deal over and over again with the issue of how you know who is doing the work, as new classes were added and board members rotated. The program was explicitly set up to require no synchronous communications and no proctored exams. (As the courses developed, at least one course did come to require synchronous communication, but students were given wide latitude to schedule their hearings in business law -- and the instructor grew to regret his choice of methodology as the enrollment increased.)

The control for unethical online students is basically that it is too much work if the online class includes regular interactions with both the instructor and other students. If an online instructor has regular interactions with his or her students, then the instructor has the usual information to evaluate whether a particular paper or test answer is written by the student or by a proxy. Some online students complain about "busy work" that involves reading, researching, and responding to narrative materials online as part of the "lecture" component of a class -- and online faculty find it time consuming to provide such interactivity with course content. But in my mind this type of material in an online course is the very "control" you are asking about.

Barbara W. Scofield, PhD, CPA 
Associate Professor of Accounting 
University of Dallas |1845 E. Northgate 
Irving, TX 75062 

December 3, 2004 reply from Chuck Pier [texcap@HOTMAIL.COM

Barbara, I think your explanation of the controls is exactly what I have experienced. I have not taken an online course, or even taught one, but my wife completed her entire MS in Library Science online through North Texas. My observations from watching her were that the amount of work and asynchronous communication required were significant. The course required extensive reading, and it would be expensive to pay someone else to do the work for the student, although I am sure that it has been done and will be done in the future. I know that my wife worked a lot more in this online environment than she did in the traditional classroom, and I felt that most of the work was an attempt to validate the lack of traditional testing, even in the online format.

This might also explain Laurie's comment about the virtual experience being more satisfying than the traditional courses. Based on my wife's experience and Barbara's comments, I would think that the amount of work also creates a sense of "ownership" in an online student's experience.

However, based on the amount of work required, I know that these programs are not for everyone. You have to be mature and dedicated to put in the time required to be successful. Based on what I see in my classroom, I am not worried about on-line education supplanting me or my colleagues anytime in the future.

Chuck

December 3, 2004 reply from Patricia Doherty [pdoherty@BU.EDU

I co-teach in a distance-learning program for Seton Hall, and echo what others have said. We have threaded discussions of cases online, and the students are also members of teams, with a separate thread for each team to discuss the week's written (team) assignment. They really do have "online personalities," and those are revealed to everyone in the class after the first week of these dual discussions -- not just to the instructors, but to the other students -- so I think an imposter, unless they actually did the course from start to finish as someone else, would quickly be noticed.

We see the thought process they go through as they formulate assignments - they even upload preliminary work as they progress. So, a final version completely different from the preliminary would, again, be noticed. And each team works on the assignment together, with one person - sometimes a different person each week - delegated to submit the final version. Again, that's hard to cheat on. The final assignment is individual, and I think we'd notice immediately if the work were very different from what we have seen of a person for an entire course. That said, anyone motivated enough to cheat could find a way. The question is whether we want to waste our time devising ever more complicated schemes to thwart each new cheating plan, making the courses less pleasant for the students who don't cheat as well as for the teachers, or whether we prefer to spend the time making the course as rich and productive and useful, and as close to a face-to-face experience, as we can.

p

December 3, 2004 reply from Charlie Betts [cbetts@COLLEGE.DTCC.EDU

Hi Steve,

I doubt that there are any 100% effective controls to prevent cheating in online courses, just as there are no 100% effective accounting controls to prevent fraud through collusion, but there are controls that can at least minimize the possibility that cheating will occur.

I agree with the comments of Amy and the other respondents to your question, and I would feel comfortable with what they are doing in their courses if I were teaching those graduate level courses. But I'm a teacher in a community college, and one of the online courses that I teach on a regular basis is the first principles course. Over fifty percent of my students in a typical class are not accounting majors and are taking the course only because it's a requirement for graduation in their major. There's also usually a small percentage of students from other colleges and universities in the classes, although for the summer session this percentage is often quite large. Given those circumstances, I feel that I have to have more safeguards in place to ensure that the work I receive from students is their own.

The primary control that I use is a requirement that three of the six tests in the course must be proctored. This is not a problem with our own students since each of my college's (Delaware Tech) four campuses has a testing center that is open in the evenings and on weekends. All my tests are online, but I "password protect" the proctored tests. For each proctored test, I email each testing center a list of the students who will be taking the test, the password, and any special testing instructions. The testing centers check the students' picture IDs before they are admitted to the testing center.

Part of each student's grade is a project somewhat similar to a traditional practice set, which I have modified so that it can be completed on Excel worksheets, which I provide. When the student has completed this work, I require them to take what I call an "audit" test on their work. This is a short test that asks them simply to look up certain figures from their completed work and to repeat certain calculations they had to make. This audit test must also be proctored. The audit test is a simple test for someone who has done their own work, but would be very difficult to pass for someone who had "hired" someone else to do their work for them.

For students who are unable to take the proctored tests at one of our testing centers, I require them to provide a proctor whom I must approve. Since most schools have testing centers of some sort, this is usually done through their school's testing center. Other proctors that students have provided include professors at their schools, school libraries, ministers, local CPAs, etc. For one student who started the course as a local student and finished it on temporary duty in Iraq, the proctor was the student's company commander. The student is responsible for providing the proctor, and the proctor must establish their identity in some way, usually by a letter to me on their school/company letterhead.

I've compared the scores from both the proctored and unproctored tests in my online courses with the scores of identical tests given in face-to-face courses and there is no significant difference, although the proctored online scores do tend to be slightly higher, a difference I attribute to the slightly better quality of student I find in the online classes.

I know this seems like a cumbersome system - I sometimes think it is myself - but for a beginning principles course I feel that these or similar safeguards are necessary, and in practice it really works much more smoothly than it would seem from my description.

I've really only had one problem, and that occurred last summer. It involved a student at a university in a neighboring state, which I won't name because I hold the university in much higher regard than I do this particular student. After numerous emails which complained in a highly ungrammatical manner that the proctored tests were unfair and gave innumerable reasons why he should be exempt from this requirement, all of which were naturally rejected, I received an email from someone purporting to be an employee in the school's library and offering to be a proctor for that student's test. Since the email was written in the same ungrammatical style as the student's prior emails, I didn't have to possess the acumen of a Sherlock Holmes to be suspicious. But just to be sure, I went to the school's web site, located the name and phone number of the librarian, and called her to "verify" the prospective proctor's employment. It was not really a surprise that the librarian had never heard of her "employee." I then emailed the "proctor" to express my surprise that the librarian had no idea who the "proctor" was. This email was shortly followed by another email from the student informing me that he was dropping the course. So even this tale had a happy ending.

Charlie Betts ----------------------------------------------------------- 
It's not so much what folks don't know that causes problems. It's what they do know that ain't so. -
Artemus Ward

Charles M. Betts DTCC, 
Terry Campus 
100 Campus Drive Dover DE 19904 
cbetts@college.dtcc.edu
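
A note on Charlie Betts's score comparison above: the informal check he describes (proctored versus unproctored online scores showing no significant difference) is easy to formalize with a two-sample test. A minimal sketch follows, assuming the scores sit in two plain lists; the numbers are invented for illustration and are not his data.

```python
# Hypothetical comparison of proctored vs. unproctored online test scores.
# The score lists are illustrative, not data from Betts's courses.
from scipy import stats

proctored = [78, 85, 91, 66, 88, 73, 80, 95, 84, 77]
unproctored = [74, 82, 88, 70, 85, 79, 76, 90, 81, 72]

# Welch's t-test does not assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(proctored, unproctored, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with "no significant difference."
```

With larger classes, a nonparametric alternative (e.g., a Mann-Whitney test) might be preferable if the score distributions are skewed.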
 

 

Bob Jensen's threads on distance education are at http://www.trinity.edu/rjensen/000aaa/0000start.htm 


November 1, 2003 message from Douglas Ziegenfuss [dziegenf@ODU.EDU

The GAO published a report "Measuring Performance and Demonstrating Results of Information Technology Investments" publication # GAO/AIMD-98-89.

You can retrieve this report from the GAO website at www.gao.gov  and look under reports. Hope this helps.

Douglas E. Ziegenfuss 
Professor and Chair, 
Department of Accounting 
Room 2157 Constant Hall 
Old Dominion University Norfolk, Virginia 23529-0229


Distance Education:  The Great Debate

From Infobits on March 1, 2002

EVALUATION STRATEGIES FOR DISTANCE EDUCATION

"The many factors involved in the success of distance offerings makes the creation of a comprehensive evaluation plan a complex and daunting task. Unfortunately, what may seem the most logical approach to determining effectiveness is often theoretically unsound. For example, comparing student achievement between distance and face-to-face courses may seem a simple solution, yet the design is flawed for a number of reasons. However, theoretically sound approaches do exist for determining the effectiveness of learning systems, along with many different methods for obtaining answers to the relevant questions." In "Measuring Success: Evaluation Strategies for Distance Education" (EDUCAUSE QUARTERLY, vol. 25, no. 1, 2002, pp. 20-26), Virginia Tech faculty Barbara Lockee, Mike Moore, and John Burton explain the factors to consider when evaluating distance education (DE) programs. Sharing the experience gained from DE evaluations at Virginia Tech, they provide guidance to readers who want to set up evaluation plans at their institutions. The article is available online (in PDF format) at http://www.educause.edu/ir/library/pdf/eqm0213.pdf 

The link to the Lockee et al. paper is at http://www.educause.edu/ir/library/pdf/eqm0213.pdf 

Bob Jensen's threads on assessment are at http://www.trinity.edu/rjensen/assess.htm 


From EDUCAUSE at http://www.educause.edu/ 

ACE-EDUCAUSE distance learning monograph published
The American Council on Education (ACE) and EDUCAUSE have just published the second monograph in a series on distributed education. Maintaining the Delicate Balance: Distance Learning, Higher Education Accreditation, and the Politics of Self-Regulation, by Judith S. Eaton, President of the Commission for Higher Education Accreditation, can be accessed in PDF format or purchased from ACE. http://www.educause.edu/asp/doclib/abstract.asp?ID=EAF1002 

Abstract 
Maintaining the Delicate Balance: Distance Learning, Higher Education Accreditation, and the Politics of Self-Regulation is the second monograph in a series of papers on distributed education commissioned by the American Council on Education (ACE) and EDUCAUSE. It describes the impact of distance learning on the balance among accreditation (to assure quality in higher education), institutional self-regulation, and the availability of federal money to colleges and universities. The paper confronts the challenges of protecting students and the public from poor-quality higher education, and attending to quality in an increasingly internationalized higher education marketplace.

View HEBCA proof-of-concept video
Visit the EDUCAUSE Information Resources Library to view the video that was shown at a recent demonstration of the Higher Education Bridge Certification Authority (HEBCA), the Federal Bridge, and the Public Key Interoperability project. Read the press release describing the proof-of-concept event.

NSF releases latest HPNC announcement
In a recently released High Performance Network Connections for Science and Engineering Research (HPNC) announcement, the NSF encourages U.S. institutions of higher education and institutions with significant research and education missions to establish high-performance (at or above 45 megabits per second) Internet connections where necessary to facilitate cutting edge science and engineering research. View the announcement and instructions for proposal submission.


Hi Kevin,

Thank you for the message below.  My concern with John Sanford's report is that critics of distance education often have never tried it.  Or even if they have tried it, they have never tried it with the instant message intensity of an Amy Dunbar --- http://www.trinity.edu/rjensen/book01q3.htm#Dunbar 

I just do not think the armchair critics really appreciate how the Dunbar-type instant messaging pedagogy can get inside the heads of students online.  

But I think it is safe to say that the Sanford-type critics will never have the motivation and enthusiasm to carry off the Dunbar-type instant messaging pedagogy.  For them, and for many of us (actually I'm almost certain that I could not pull off what Dr. Dunbar accomplishes), it is perhaps more "suicidal" for students.

I also think that success of distance education depends heavily upon subject matter as well as instructor enthusiasm.  But I think there is only a small subset of courses that cannot be carried off well online by a professor as motivated as Dr. Dunbar.

I am truly grateful that I was able to persuade Professor Dunbar and a distance education expert from Duke University to present an all-day workshop in the Marriott Rivercenter Hotel on August 13, 2002.  If our workshop proposal is accepted by the AAA, this is an open invitation to attend.  Details will soon be available under "CPE" at http://accounting.rutgers.edu/raw/aaa/2002annual/meetinginfo.htm 
I wish John Sanford would be there to watch the show.

Thanks for helping me stay informed!  Other views on the dark side are summarized at http://www.trinity.edu/rjensen/000aaa/theworry.htm 

Bob Jensen

Bob, 
Since I know you track information technology WRT education, I thought you might be interested in this. The original source is the "Stanford Report" cited below: TP is a listserv that redistributed it.
Kevin

Folks:

The article below presents an interesting take on the limitations of technology, teaching, and learning. It is from the Stanford Report, February 11, 2002 http://www.stanford.edu/dept/news/report/ . Reprinted with permission.

Regards,

Rick Reis reis@stanford.edu  UP NEXT: Book Proposal Guidelines

 

HIGH-TECH TEACHING COULD BE "SUICIDAL"

BY JOHN SANFORD

University educators largely extol the wonders of teaching through technology. But skeptics question whether something is lost when professors and lecturers rely too heavily on electronic media, or when interaction with students takes place remotely -- in cyberspace rather than the real space of the classroom.

Hans Ulrich Gumbrecht, the Albert Guerard Professor of Literature, is one such skeptic. "I think this enthusiastic and sometimes naïve and sometimes blind pushing toward the more technology the better, the more websites the better teacher and so forth, is very dangerous -- [that it] is, indeed, suicidal," Gumbrecht said, speaking at the Jan. 31 installment of the Center for Teaching and Learning's "Award-Winning Teachers on Teaching" series.

But Gumbrecht cautioned that there are few, if any, studies either supporting or rejecting the hypothesis that traditional pedagogy is superior to teaching via the Internet or with a host of high-tech classroom aids. "If [such studies] exist, I think we need more of them," he said.

He added that he could point only to his "intuition that real classroom presence should be maintained and is very, very important," and emphasized the need for educators to critically examine where technology serves a useful pedagogical function and where it doesn't.

However, Gumbrecht allowed that, for courses in which knowledge transmission is the sole purpose, electronic media probably can do the job well enough. Indeed, given the 20th century's knowledge explosion and the increasing costs of higher education, using technology as opposed to real-life teachers for the transmission of information is probably inevitable, he said.

In any case, knowledge transmission should not be the core function of the university, he added, noting that the Prussian statesman and university founder Wilhelm von Humboldt, sociologist Max Weber and Cardinal John Henry Newman all held that universities should be places where people confront "open questions."

"Humboldt even goes so far to say -- and I full-heartedly agree with him -- they should ideally be questions without a possible answer," Gumbrecht said. He asserted the university should be a place for "intellectual complexification" and "riskful thinking."

"We are not about finding or transmitting solutions; we are not about recipes; we are not about making intellectual life easy," he continued. "Confrontation with complexity is what expands your mind. It is something like intellectual gymnastics. And this is what makes you a viable member of the society."

Paradoxically, "virtual" teacher-student interaction that draws out this kind of thinking probably would be much costlier for the university than real-time, in-class teaching, Gumbrecht said. The reason for this, he suggested, is that responding to e-mail from students and monitoring their discussion online would require more time -- time for which the university would have to pay the teacher -- than simply meeting with the students as a group once or twice a week.

In addition, Gumbrecht asserted that discussions in the physical presence of others can lead to intellectual innovation. He recalled a Heidegger conference he attended at Stanford about a year ago, where he said he participated in some of the best academic discussions of his career. Heidegger himself "tries to de-emphasize thinking as something we, as subjects, perform," Gumbrecht said. "He says thinking is having the composure of letting thought fall into place." Gumbrecht suggested something similar happens during live, in-person discussions.

"There's a qualitative change, and you don't quite know how it happens," he said. "Discussions in the physical presence have the capacity of being the catalyst for such intellectual breakthroughs. The possibility of in-classroom teaching -- of letting something happen which cannot happen if you teach by the transmission of information -- is a strength."

Gumbrecht argued that the way in which students react to the physical presence of one another in the classroom, as well as to the physical presence of their professor, can invigorate in-class discussions. "I know this is problematic territory, but I think both the positive and negative feelings can set free additional energy," he said. "I'm not saying the physical presence makes you intellectually better, but it produces certain energy which is good for intellectual production."

Asked to comment on some of the ideas Gumbrecht discussed in his lecture, Decker Walker, a professor of education who studies technology in teaching and learning, agreed that pedagogy via electronic media may work best in cases where information transmission is the goal -- for example, in a calculus course. In areas such as the humanities and arts, it may be a less valuable tool, he said.

In any case, the physical presence of teachers can serve to motivate students, Walker said. "I think young people are inspired more often by seeing other people who are older -- or even the same age -- who do remarkable things," he said. "It would be hard to replace this with a computer."

On the other hand, Walker maintained that computer technology can be a useful educational aid. One such benefit is access to scholars who are far away. "Technology can enable a conversation, albeit an attenuated online one, with distant experts who bring unique educational benefits, such as an expert on current research on a fast-moving scientific topic," Walker said. "This may greatly enrich a live class discussion with a local professor."

Walker maintained that the university environment is not in danger of being supplanted by technology. On the contrary, he noted, large businesses have adopted aspects of the university environment for their employees' professional education. For example, General Motors started GM University, whose main campus is at the company's new global headquarters in Detroit's Renaissance Center.

Museums also function in some ways like universities, he noted. For example, the Smithsonian Institution has numerous research, museum and zoo education departments.

And for all the emphasis high-tech companies put on developing devices and software for remote communication, many have had large campuses constructed where workers are centralized -- a nod, perhaps, to the importance of person-to-person interaction.

Rick Reis, executive director of Stanford's Alliance for Innovative Manufacturing and associate director of the Learning Lab's Global Learning Partnerships, noted that the subject of technology in education covers a lot of territory. Few people, for example, are likely to argue that making students trudge over to the library's reserve desk to get a piece of reading material for a course, or making hundreds of hard copies, is preferable to posting it on the web, Reis said. But he added that whether the kind of teaching generally reserved for a seminar could be as effective online is an open question.

Reply from Amy Dunbar [ADunbar@SBA.UCONN.EDU]

George, 
You wondered about the following Sanford statement:

"Paradoxically, 'virtual' teacher-student interaction that draws out this kind of thinking probably would be much costlier for the university than real-time, in-class teaching ... responding to e-mail from students and monitoring their discussion online would require more time -- time for which the university would have to pay the teacher -- than simply meeting with the students as a group once or twice a week."

Although I probably do spend more time "teaching" now that I am online (I teach two graduate accounting courses: advanced tax topics and tax research), I think the more important issue for me is "when," not "how much." My students work full time. They are available at night and on weekends, and they prefer to do coursework on weekends. Thus, I spend a lot of time at home in front of my computer with my instant messenger program open. If a student wants to talk, I'm available during pre-determined times. For a compressed six-week summer session with two classes and around 60 students, I live online at night and on weekends. With a regular semester online class, I base my online hours on a class survey of preferences. Last fall I was online from 7 to 9 or 10 at least two nights a week, Saturday afternoons, Sunday mornings for the early birds (an hour or two), and then Sunday evenings from 6 to 10. Sunday evenings were my busiest times. On the other scheduled days, I generally could do other easily interruptible tasks while I was online. Frequently a group of students would call me into a chat room, either on AIM or WebCT. I think that my online presence takes the place of "the physical presence of teachers [which] can serve to motivate students." Students log on to AIM, and they see me online. For my part, I love logging on and seeing my students online. They are just a click away.

Most of my online students think the burden of learning has been shifted to them, and I'm just a "guide on the side." And they are right. Online learning is not for everyone, but as Patricia Doherty noted, live classroom instruction isn't an option for all students, particularly students who travel in connection with their work. And just as not all live classroom instruction encompasses the dynamic interchanges described by Sanford, not all online courses will either, but I have certainly been an observer and a participant in spirited exchanges among students.

As for the comment that the university would have to pay the teacher for additional time, I'm not sure such time is quantifiable because I do other things when I am online but no one is "talking" to me. As a tenure track prof, I'm not sure how that comment would apply in my case in any event. Perhaps where the extra cost arises is in the area of class size. Handling more than 30 students in an online class is difficult. Thus, schools may have to offer more sections of online courses. __________________________________ 

GO HUSKIES!!! (BEWARE OF THE DOG) 
Amy Dunbar (mailto:adunbar@sba.uconn.edu) 860/486-5138 http://www.sba.uconn.edu/users/ADunbar/TAXHOME.htm  
Fax 860-486-4838 
University of Connecticut School of Business, Accounting Department 
2100 Hillside Road, Unit 1041A Storrs, CT 06269-2041

Reply from Dan Gode, Stern School of Business [dgode@STERN.NYU.EDU

David Noble has been one of the foremost critics of distance learning for the last four years. He is widely quoted. I too have found his articles (at http://communication.ucsd.edu/dl/ ) interesting. While discussing them with my colleague today, I could not avoid noticing the irony that he himself is one of the biggest beneficiaries of the internet and distance learning.

Many of us would not have "learned" about his views without the web. He has been able to "teach" his ideas in the distance learning mode almost free only because of the web. In fact, most of the critics of distance learning have achieved their fame precisely because of the knowledge dissemination enabled by the web.

I agree that adoption of distance learning will be much slower than the expectation of many distance learning companies and universities but it will be foolhardy to ignore the gradual technological innovation in education.

A select few in New York can afford the live entertainment of Broadway, most others are grateful for the distance entertainment that is available cheaply to them. Distance learning may not replace classroom learning, but it will surely provide much needed low cost education to many.

Dan Gode 
Stern School of Business 
New York University

Note from Bob Jensen:  You can read more about David Noble at http://www.trinity.edu/rjensen/000aaa/theworry.htm 

Reply from Bob Jensen

Hi Jagdish,

I agree with you to a point. However, I am always suspicious of academics who see only the negative side of a controversial issue. I'm sorry, but I find David Noble to be more of a faculty trade union spokesperson than an academic. Much of his work reads like AAUP diatribe.

Those of you who want to read some of his stuff can do so in my summary of the dark side of distance education at http://www.trinity.edu/rjensen/000aaa/theworry.htm 

I would have much more respect for David Noble if he tried to achieve a little more balance in his writings.

Bob (Robert E.) Jensen Jesse H. Jones Distinguished Professor of Business Trinity University, San Antonio, TX 78212 Voice: (210) 999-7347 Fax: (210) 999-8134 Email: rjensen@trinity.edu  http://www.trinity.edu/rjensen 

-----Original Message----- 
From: J. S. Gangolly [mailto:gangolly@CSC.ALBANY.EDU]  
Sent: Thursday, February 28, 2002 9:37 AM 
To: AECM@LISTSERV.LOYOLA.EDU  
Subject: Re: The Irony of David Noble and other critics of distance learning

Dan,

Let me play the devil's advocate once again; this time I do so with a bit of conviction.

Noble's tirade has been against the commoditisation of instruction and the usurping of what are traditionally regarded as academic faculty prerogatives by administrators in their quest for revenues (or cutting costs). These are real issues, and a knee-jerk reaction does no one a service.

Noble's arguments are based on the actual experiences at UCLA and York. I suppose if he were to rewrite his pieces today, the list would be much longer.

Noble's reservations are also based on the distinct possibility of higher education turning into diploma mills (Reid's observation: "no classrooms," "faculties are often untrained or nonexistent," and "the officers are unethical self-seekers whose qualifications are no better than their offerings.")

I am a great enthusiast for distance learning, but I think the debate Noble is fostering is a very legitimate one. It will at least sensitize us all to the perils of enronisation of higher education. Do we need the cohorts of the likes of Lay and Skilling running the show? What guarantee do we have that once it is commoditised, a non-academic (with or without qualifications and appreciation for higher education) will "manage" it?

I do very strongly feel that distance education has a bright future, but the Noble-like debates will strengthen it in the long run. There is a need for the development of alternative pedagogies, etc.

Back in the late 60s, I was working in a paper mill in the middle of nowhere in India, and I started taking a course in electrical engineering in the distance mode (we used to call it correspondence courses). Unfortunately, in those days there was no near-universal access to computers, and it was not easy. However, it put the burden of learning on me much more so than in my usual higher education, even at decent schools (including one of the IIMs). Unfortunately, I had to discontinue it because of the pressure of work.

I look at most existing distance learning today as the Model T of education. We need to figure out how we can improve on it, not take it as a matter of faith.

Jagdish

Reply from Paul Williams [williamsp@COMFS1.COM.NCSU.EDU

Jagdish's point is well spoken; the issue is the commodification of higher education (and everything else, for that matter). "Efficiency" is not the only value humans cherish. There is an interesting article in the last Harper's by Nick Bromell, a professor of English at UMass Amherst, titled "Summa Cum Avaritia." Higher education produces substantial revenues, and a good deal of the discussion about distance education is really about coopting those revenues (privatizing education for profit).

Reply from George Lan [glan@UWINDSOR.CA

Hi Amy,

Thanks for sharing your on-line experience with us. It shows what flexible learning can achieve. However, those who think that teaching an on-line or distance education course is a walk in the park and that on-line courses are cash cows will probably think twice. Administrators should ensure that the classes are not too big, so that the on-line instructor can elicit the kind of interaction and learning that you mention.

The "psychiatrist", "nurse" or sometimes the "gladiator" in me prefers personal contact courses but I do recognize the value of on-line and distance education courses, especially for those to whom live classroom is not an option, as Pat and you have mentioned.

You make a critical point when you mention that "most of my online students think that the burden of learning has been shifted to them, and I'm just a 'guide on the side.'" Having taught some distance education courses in the past, I've noticed that the drop-out rate seems to be higher in my dist. ed courses (I agree that I have not used the power of technology and the computer to the fullest before), but could some of the students find the burden of learning on their own unbearable? In Canada, the Certified General Accountants have a high quality on-line delivery of courses for those wishing to pursue the accounting designation. In the big city centres, the students also have the choice of attending lectures -- they pay some extra fee (however, all assignments are submitted on-line, usually on a weekly basis, and they are graded and returned to the student within 7 days; there is an efficient system of markers and tutors for each course). The onus to learn is on the student, and several of them have to repeat the same course several times (which probably is not dependent on whether they choose to attend lectures or not). Financially and time-wise, it can be very costly to the students. But then, as stated by the economist Spence, education is a signal.

George Lan

Reply from Ross Stevenson [ross.stevenson@AUT.AC.NZ

Hi (from the South Pacific) aecmers

I have written heaps of computer based (first year accounting) stuff that students can:

1. Use at their own pace in a teaching computer lab (my classroom), and/or
2. Use on their home computer.

When writing the stuff I had 'distance learning' in mind. However, I, and most of my students, enjoy the flexible computer lab approach, during which they can:

1. Listen to me (all stuff projected on a large wall screen), or
2. Work at their own pace from their monitor.

In my mind, there is no doubt that a majority of (first year) students prefer the classroom (dare I say 'non-distance learning') IT approach. Some of my colleagues teach the same course with no more technology than overhead projectors.

I am planning some research along the following lines

At the beginning of the semester, each student completes:

1. An objective profile of themselves (age, gender, English as their first language, etc.)

2. A subjective profile of themselves as to what they perceive are their preferred learning environments (IT based? classroom? home? etc.)

At the end of the semester: more student feedback as to how they rated my classroom-IT delivery.

PURPOSE OF RESEARCH 
To see if we can survey students at *beginning* of semester and advise them as to which class (lecturer & delivery style) would probably suit them

I would appreciate references to any research similar to the above that you are aware of.

Regards

Ross Stevenson 
Auckland Uni of Technology NZ
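
Ross's proposed study is, at bottom, a small prediction problem: use start-of-semester profile answers to predict which delivery style a student will rate most highly at the end of the semester. The sketch below is purely illustrative; the survey fields, the toy records, and the choice of a simple decision-tree classifier are assumptions made for the example, not part of his design.

# Hypothetical sketch of the matching step in the proposed study.
# Fields, records, and the classifier are illustrative assumptions only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [age, english_first_language (1/0), prefers_IT_based (1/0), prefers_classroom (1/0)]
profiles = [
    [18, 1, 1, 0],
    [24, 0, 0, 1],
    [19, 1, 1, 1],
    [31, 0, 0, 1],
    [20, 1, 1, 0],
    [27, 1, 0, 1],
]
# End-of-semester outcome: which delivery the student rated more highly.
rated_best = ["computer-lab", "traditional", "computer-lab",
              "traditional", "computer-lab", "traditional"]

model = DecisionTreeClassifier(max_depth=2).fit(profiles, rated_best)

# Advise an incoming student (21, English first language, prefers IT, not classroom).
print(model.predict([[21, 1, 1, 0]])[0])

In practice the interesting part would be validating such a model against the end-of-semester ratings Ross describes, and checking whether the objective or the subjective profile items carry most of the predictive weight.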

 

Reply from arul.kandasamy@indosuez.co.uk 

George, 
you asked: could some of the students find the burden of learning on their own unbearable?

IMO, online learning isn't for everyone. I suggest a switch to the University of Hartford's live grad program when students are dissatisfied with online learning. (UConn's MSA program is an online program.) I have noticed that if students hang in, however, their attitude frequently changes. By the time my students take me for my second online class, most respond to my survey question re: online vs live preference by choosing online. I thank Bob Jensen for his kind words in yesterday's posting, but let there be no doubt that I have students who do not like online learning. For example, one student in my first online class said, "This experience was very new to me and I learned a lot, but my expectations were different b/c I didn't know this was going to be an on-line class. I don't think I could have gotten through this class without the help and support of you and my group members. Above I checked that I would prefer a live classroom setting. Tax can be confusing and I think I would understand the material better if you were telling it to me rather than me reading it on the computer. I learn better by hearing things than by reading them. Even though this class did not completely support my style of learning, I still think it is one of the best classes I have taken, mostly because of the way it is structured - group work. (And also because it has a great teacher.)" (You didn't think I would pick a comment that didn't say something positive about me, did you? ;-)) And "I just think that as much as we interacted with you Dunbar, it's just that much harder because in the end, all of your hard work making the content modules, etc. has to be self-taught on a level that I don't think any of us are accustomed to (or fully capable of yet)."

I am very interested in learning more about Canada's experience with the Certified General Accountants online courses. I didn't realize that live classes were an option. Has anyone compared outcome results for live/online vs strictly online students?

Dunbar


Reply from Thomas C. Omer (E-mail) [tcomer@UIC.EDU]

While I haven't paid much attention to David Noble, I have paid attention to administrators whose incentives rest on balancing the budget rather than thinking about the educational issues that result from developing or offering online courses. It is critical that faculty who are interested in being involved with distance learning show some solidarity in rejecting offers of distance learning based on cost measures alone. We are in the business of education, after all, not budget balancing. The extent to which administrations take advantage of faculty members exploring new ways to educate will only reduce our educational institutions to paper mills, a problem some might suggest is already occurring in many settings. Think for a moment about whether the grade you assign to a student is really within your authority. At my home institution and here at UIUC it is not; I also do not have the ability to drop or add students to a class. While this sounds like I am whining (I probably am), it also suggests that my control of the factors affecting the educational experience and outcomes is slowly degrading, and adopting distance learning without explicit contracts as to what I am allowed to do and what the administration cannot do sets the stage for making distance learning a nightmare for me and potentially an educational farce for students.

I think Amy's experience has been very positive, and I certainly agree that distance learning is not for every student. Unfortunately, my first experience with developing a curriculum based on distance learning started with a discussion of the cost effectiveness of the approach, not the educational issues.

I now step off the soap box,

Congratulations Amy!!!

Thomas C. Omer 
Associate Professor (Visiting) Department of Accountancy 
University of Illinois at Urbana-Champaign



In the SCALE program at the University of Illinois, where students were assigned (I don't think they could choose) to either traditional classroom sections or Asynchronous Learning Network (ALN) sections, there was a tendency for many students to prefer the ALN sections that never met in a live classroom. Presumably, many students prefer ALN sections even when they are full-time students living on campus. You can read the student evaluations at http://w3.scale.uiuc.edu/  

Also see the above discussion regarding the SCALE Program.

The Problem of Attrition in Online MBA Programs

We expect higher attrition rates from learners taking degrees both in commuting programs and in most online programs. The major reason is that, prior to enrolling for a course or program, people tend to be more optimistic about how they can manage their time between a full-time job and family obligations. After enrolling, unforeseen disasters do arise, such as family illnesses, job assignments out of town, car breakdowns, computer breakdowns, job loss or change, etc.

The problem of online MBA attrition at West Texas A&M University is discussed in "Assessing Enrollment and Attrition Rates for the Online MBA," by Neil Terry, T.H.E. Journal, February 2001, pp. 65-69 --- http://www.thejournal.com/magazine/vault/A3299.cfm 

Enrollment and Attrition Rates for Online Courses

Bringing education to students via the Internet has the potential to benefit students and significantly increase the enrollment of an institution. Student benefits associated with Internet instruction include increased access to higher education, flexible location, individualized attention from the instructor, less travel, and increased time to respond to questions posed by the instructor (Matthews 1999). The increase in educational access and convenience to the student should benefit the enrollment of an institution by tapping the time- and geographically-constrained learner. The results presented in Table 1 indicate that online courses are doing just that. Specifically, Internet courses averaged higher enrollments than the campus equivalents in 12 of the 15 business courses. The online delivery had an overall average of 34 students per course, compared to only 25 students in the traditional campus mode.

Although enrollment is relatively high, it is also important to note that the attrition rate was higher in 13 of the 15 online courses. Potential explanations for the higher attrition rates include students not being able to adjust to the self-paced approach in the virtual format, the rigor of study being more difficult than students anticipated, and a lack of student and faculty experience with the instruction mode. A simple sign test reveals that enrollment and attrition rates are both statistically greater in the online format (Conover 1980).

Table 1. Average Enrollment and Attrition Rates for Campus and Online Courses

Course Name                            Campus Enrollment (Attrition)    Online Enrollment (Attrition)
Financial Accounting                   31 (22%)                         40 (16%)
Accounting for Decision Making         43 (13%)                         45 (16%)
Contemporary Economic Theory           11 (19%)                         13 (23%)
Advanced Macroeconomic Theory          24 (15%)                         26 (19%)
International Economics                13 (2%)                          48 (3%)
Money and Capital Markets              14 (7%)                          44 (14%)
Corporate Finance                      36 (23%)                         47 (36%)
Statistical Methods in Business        10 (13%)                         14 (43%)
Quantitative Analysis in Business      33 (17%)                         22 (33%)
Computer Information Technology        40 (7%)                          38 (5%)
Managerial Marketing                   11 (9%)                          19 (24%)
Seminar in Marketing                   23 (11%)                         50 (14%)
Organizational Behavior                47 (13%)                         31 (29%)
International Management               17 (26%)                         44 (27%)
Strategic Management                   24 (8%)                          28 (7%)
Overall Average                        25 (14%)                         34 (21%)
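
The sign test cited above (Conover 1980) simply counts, course by course, how often the online figure exceeds the campus figure and asks how unlikely so lopsided a count would be if the two delivery modes were really equivalent. The short Python sketch below illustrates that calculation on the rounded figures reprinted in Table 1; it is only an illustration of the method, since the counts and p-values depend on these rounded percentages rather than on Terry's underlying data.

# Minimal sketch of the one-sided sign test applied to the Table 1 figures.
# Results depend on the rounded percentages above; illustration only.
from math import comb

# (campus_enrollment, campus_attrition_%, online_enrollment, online_attrition_%)
courses = {
    "Financial Accounting":              (31, 22, 40, 16),
    "Accounting for Decision Making":    (43, 13, 45, 16),
    "Contemporary Economic Theory":      (11, 19, 13, 23),
    "Advanced Macroeconomic Theory":     (24, 15, 26, 19),
    "International Economics":           (13,  2, 48,  3),
    "Money and Capital Markets":         (14,  7, 44, 14),
    "Corporate Finance":                 (36, 23, 47, 36),
    "Statistical Methods in Business":   (10, 13, 14, 43),
    "Quantitative Analysis in Business": (33, 17, 22, 33),
    "Computer Information Technology":   (40,  7, 38,  5),
    "Managerial Marketing":              (11,  9, 19, 24),
    "Seminar in Marketing":              (23, 11, 50, 14),
    "Organizational Behavior":           (47, 13, 31, 29),
    "International Management":          (17, 26, 44, 27),
    "Strategic Management":              (24,  8, 28,  7),
}

def sign_test_p(successes: int, trials: int) -> float:
    """One-sided binomial (sign test) p-value: P(X >= successes | p = 0.5)."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

for label, campus_idx, online_idx in [("enrollment", 0, 2), ("attrition", 1, 3)]:
    pairs = [(v[campus_idx], v[online_idx]) for v in courses.values()]
    nonties = [(c, o) for c, o in pairs if c != o]            # ties are dropped
    higher_online = sum(1 for c, o in nonties if o > c)
    p = sign_test_p(higher_online, len(nonties))
    print(f"{label}: online higher in {higher_online} of {len(nonties)} courses, "
          f"one-sided sign-test p = {p:.4f}")

With 12 of 15 courses showing higher online enrollment, for example, the one-sided p-value works out to roughly 0.018, which is why such differences can be called statistically significant.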

The results shown in Table 1 indicate that some business disciplines are more conducive to attracting and retaining students than others are. Discipline-specific implications include the following:

Accounting
The basic accounting course (Financial Accounting) and the advanced accounting course (Accounting for Decision Making) both have higher online enrollment and attrition rates. Of primary interest is the observation that attrition rates in the two instruction modes are comparable, contradicting the notion that the detail-specific nature of accounting makes courses unconvertible to the online format.

Economics
The online versions of the basic economic course (Contemporary Economic Theory) and the advanced economic course (Advanced Macroeconomic Theory) both have higher enrollment and attrition rates than their classroom counterparts. The two field courses in economics (International Economics and Money and Capital Markets) both have online enrollments over three times greater than the campus equivalent, indicating an extreme interest in global economic courses delivered via the Internet.

Finance
The corporate finance course in the study had a substantially higher online enrollment and attrition rate than its classroom counterpart. The most glaring observation is the lack of retention in the online format. The attrition rate in the online finance course is an alarming 36 percent, indicating that one in three students who start the course do not complete it.

Business Statistics
Enrollment in the basic statistics course (Statistical Methods in Business) is slightly higher in the online mode, but enrollment in the advanced course (Quantitative Analysis in Business) is substantially higher in the campus mode. Attrition rates for the online statistics course are extremely high. The 43 percent attrition rate of the basic online statistics course is higher than that of any other course in the study and may have a lot to do with campus enrollment in the advanced statistics course being higher than the online counterpart.

Computer Information Systems
Enrollment and attrition rates for the Computer Information Technology business course are not significantly different across instruction modes. The online attrition rate of five percent is well below the overall average of 21 percent.

Marketing
The basic marketing course (Managerial Marketing) and the advanced marketing course (Seminar in Marketing) both have higher enrollment and attrition rates online than in the classroom. The advanced marketing course was offered four times during the study period and averaged 50 students per course, making it the most popular online course.

Management
The three management courses have atypical results. The online course in Organizational Behavior has a relatively high attrition rate with lower than average enrollment. Much like the global economic courses, enrollment in the field course in International Management is substantially higher in the online format. Enrollment and attrition rates for the MBA capstone course in Strategic Management are not significantly different across instruction modes.

Conclusions

If a university offers courses over the Internet, will anyone enroll in them? If students enroll in a Web-based course, will they complete it or be attrition casualties? The results of this study imply that online courses enroll more students, but suffer from higher attrition rates than traditional campus courses. It appears that the enrollment-augmenting advantages of Internet-based instruction, like making it easier to manage work and school and allowing more time with family and friends, are attractive to a significant number of graduate business students. The sustained higher enrollment across several business courses is a positive sign for the future of Internet-based instruction. On the other hand, attrition appears to be a problem with some of the online courses. Courses in the disciplines of accounting, economics, computer information systems, marketing, and management appear to be very conducive to the Internet format, as attrition rates are comparable to the campus equivalents. Courses in business statistics and finance, with attrition rates in excess of 30 percent, do not appear to be very well suited to the Internet instruction format. An obvious conclusion is that courses requiring extensive mathematics are difficult to convert to an Internet instruction format. It is important to note that results of this study are preliminary and represent a first step in an attempt to assess the effectiveness of Internet-based instruction. Much more research is needed before any definitive conclusions can be reached.

A Worst-Case MOO
"Students’ Distress with a Web-based Distance Education Course: An Ethnographic Study of Participants' Experiences"
http://www.slis.indiana.edu/CSI/wp00-01.html 

Noriko Hara SILS Manning Hall University of North Carolina at Chapel Hill Chapel Hill, North Carolina 27599 haran@ils.unc.edu  

Rob Kling The Center for Social Informatics SLIS Indiana University Bloomington, IN 47405 kling@indiana.edu  http://www.slis.indiana.edu/kling  (812) 855-9763

Many advocates of computer-mediated distance education emphasize its positive aspects and understate the kinds of communicative and technical capabilities and work required by students and faculty. There are few systematic analytical studies of students who have experienced new technologies in higher education. This article presents a qualitative case study of a web-based distance education course at a major U.S. university. The case data reveal a topic that is glossed over in much of the distance education literature written for administrators, instructors and prospective students: students' periodic distressing experiences (such as frustration, anxiety and confusion) in a small graduate-level course due to communication breakdowns and technical difficulties. Our intent is that this study will enhance understanding of the instructional design issues, instructor and student preparation, and communication practices that are needed to improve web-based distance education courses.

Bob Jensen's Comments
The Hara and Kling study mentioned above focuses upon student messages, student evaluations, and instructor evaluations of a single course.  The interactive communications took place using MOO software that is sometimes used for virtual classroom settings, although the original intent of both MOO and MUD software was to create a virtual space in text in which students or game users create their own virtual worlds.  You can read more about MUD and MOO virtual environments at http://www.trinity.edu/~rjensen/245glosf.htm#M-Terms.  In some universities, MOO software has been used to create virtual classrooms.  In most instances, however, these have given way to multimedia virtual classrooms rather than entirely text-based virtual classrooms.  

MOO classrooms have been used very successfully.  For example, at Texas Tech University, Robert Ricketts has successfully taught an advanced tax course in a MOO virtual classroom while students are scattered across the U.S. in internship programs.  His course is not an internship course.  It is a tax course that students take while away from campus on internships.  Professor Ricketts is a veteran tax instructor and taught the MOO course under somewhat ideal conditions.  The students were all familiar with electronic messaging, and they all knew each other very well from onsite courses they had taken together on the Texas Tech campus in previous semesters.  They had also taken previous courses from Professor Ricketts in traditional classroom settings.

In contrast to Professor Ricketts' MOO virtual classroom, the Hara and Kling study reported above is almost a worst-case scenario in a MOO virtual classroom.  The instructor was a doctoral student who had never taught the class before, nor had she ever taught any class in a MOO virtual classroom.  Half the class "had only minimal experience with computers" and had never taken a previous distance education course.  The students had never taken a previous course of any type from the instructor and did not know each other well.  The course materials were poorly designed and had never been field tested.  Students were hopelessly confused and did not deal well with text messaging (graphics, audio, and video were apparently never used in the course).  This seems utterly strange in an age when text, graphics, audio, and even video files can be attached to email messages.  It also seems strange that the students apparently did not pick up the telephone when they were so confused by the networked text messaging.

One of the most important things to be learned from the Hara and Kling study is the tendency for hopelessly confused students to give up rather than keep pestering the instructor or each other until they see the light.  Instructors cannot assume that students are willing to air their confusions.  A major reason is a fear of exposing their ignorance.  Another reason is impatience with the slowness of text messaging, where everything must be written and read instead of having conversations with audio or full teleconferencing.

In summary, the Hara and Kling study is not so much a criticism of distance education as it is a study of student behavior in settings where the distance education is poorly designed and delivered.  A similar outcome is reported in "Student Performance In The Virtual Versus Traditional Classroom," by Neil Terry, James Owens and Anne Macy, Journal of the Academy of Business Education, Volume 2, Spring 2001 --- http://www.abe.villanova.edu/tocs01.html.  An earlier report on this topic, entitled "Student and Faculty Assessment of the Virtual MBA:  A Case Study," by Neil Terry, James Owens, and Anne Macy, appears in the Journal of Business Education, Volume 1, Fall 2000, pp. 33-38 --- http://www.abe.villanova.edu/tocf00.html.  The article points out how badly many students want online MBA programs and how difficult it is to deliver an online program in which students perform as well as in a traditional classroom.  In particular, too many things get confounded to evaluate the potential of online learning.  For example, faculty are seldom veterans in online delivery at this stage of development of online learning.  The instructors are often not top faculty, who may be so involved in research projects that they balk at having to develop online learning materials.  And the materials themselves are seldom ideal for online learning in terms of streaming audio/video, online mentors who are experts on the course topics, and daily interactive feedback regarding learning progress.  

The online degree program is from West Texas A&M University in the Texas Panhandle.  The Terry, Owens, and Macy (2000) article cited above points out that student evaluations of the program were quite low (1.92 on a five-point scale where 5.00 is the highest possible rating) but the perceived need for the program is quite high (3.30 mean outcome).  Over 92% of the students urged continuation of the program in spite of unhappiness over its quality to date.  In another survey, eight out of twelve faculty delivering the courses online "feel the quality of his/her virtual course is inferior to the quality of the equivalent campus course."  However, ten of these faculty stress that they "will significantly improve the quality of the virtual course the next time it is taught via the Internet format."  The Terry, Owens, and Macy (2001) study reports that online students had 14% lower test performance than the traditional classroom control group.  This is contrary to the University of Illinois SCALE outcomes, where online students tend to perform as well or better.  See http://www.trinity.edu/rjensen/255wp.htm#Illinois.

A major complaint of the faculty is "the time required to organize, design, and implement a virtual course."  

This study is consistent with findings from many other startup online education and training programs.  The major problem is that online teaching is more difficult and stressful than onsite teaching.  A great deal of money and time must be spent in developing learning materials, and course delivery has a steep learning curve for instructors as well as students.

A portion of the conclusion of the study is quoted below:

The results of this MBA case study present conflicted views about online instruction. Both the critics who worry about quality and the advocates who contend students want online courses appear to be correct based upon this case study.  While a majority of students acknowledge the benefits of Internet instruction, they believe that the online instruction is inferior to the traditional classroom.  A significant number of students are not satisfied with the Internet program and none of the students want an entirely virtual program.  However, most students want online instruction to continue and plan on enrolling in one or more future courses.  Faculty members recognize the flexibility advantage of Internet-based instruction but express concerns over the time-intensive nature of the instruction mode and the impact of student course evaluations on promotion and tenure.

The conclusions of this article are in line with my Advice to New Faculty at http://www.trinity.edu/rjensen/000aaa/newfaculty.htm 

You can read more about assessment of virtual courses in the "assessment" category at http://www.trinity.edu/rjensen/bookbob2.htm 

Reply from Patricia Doherty [pdoherty@BU.EDU]

The New York Times had an article (I believe it was in the Sunday, November 19, edition) that addressed the perception among recruiters of online MBA programs. The gist of it was that there are many mediocre programs, but a few very good ones. The students are enthusiastic about the benefits they provide, but the business community (i.e., the ones the students hope will hire them) is still skeptical.

pat

Reply from Eckman, Mark S, CFCTR [meckman@att.com]

Reading the comments on motivation reminded me of a quote from Bernard Baruch that tells me a lot about motivation.

"During my eighty-seven years I have witnessed a whole succession of technological revolutions. But none of them has done away with the need for character in the individual or the ability to think."

While character development and critical thinking may not be the most important items considered in development of curriculum or materials for the classroom, they can be brought into many accounting discussions in terms of ethical questions, creativity in application or simple 'what if' scenarios. People have many motivations. Sometimes you can motivate people, sometimes you can't. Sometimes motivations rise by themselves.

Thinking back to undergraduate times, I still remember the extreme grading scale for Accounting 101 from 1974. It started with 97-100 as an A and allowed 89 as the lowest passing grade. The explanation was that this was the standard the profession expected in practice. I also remember 60% of the class leaving when that scale was placed on the board! They had a different set of motivations.

Bob Jensen's reply to a message from Craig Shoemaker 

Hi Craig,

You have a lot in common with John Parnell. John Parnell (Head of the Department of Marketing & Management at Texas A&M) opened my eyes to the significant thrust his institution is making in distance education in Mexico as well as parts of Texas. After two semesters, this program looks like a rising star.

Dr. Parnell was my "Wow Professor of the Week" on September 26, 2000 at http://www.trinity.edu/rjensen/book00q3.htm#092600 
You can read more about his program at the above website.

Congratulations on making this thing work. 

Bob (Robert E.) Jensen Jesse H. 
Email: rjensen@trinity.edu  http://www.trinity.edu/rjensen 

-----Original Message----- 
From: docshoe1 [mailto:docshoe1@home.com]
Sent: Sunday, November 26, 2000 11:25 AM 
To: rjensen@trinity.edu 
Subject: Education -- Online

HI Bob,

I read with interest your note regarding online education. I just concluded teaching my first one. It was an MBA capstone course -- Business Planning Seminar. I had 16 students spread throughout the USA and Mexico. The course requirement was to write and present, online, a business plan consisting of an extensive marketing plan, operations plan and financial plan. Without knowing each other, the students formed teams of 4. The student commitment required 15-20 hours per week.

I held weekly conference calls with each team, extensively used chat rooms for online discussion, and e-mailed some teams nearly every day. The demand on my time was at least twice what it would have been if I had held one 3 1/2 hour class each week.

The written plans and the online presentations were quite thorough and excellent. The outcome was, in many ways, better due to the extensive and varied communications media used. My student evaluations were as high as when I have done the course "live" in class. The "upfront" work to prepare the course was extensive.

Craig

Craig Shoemaker, Ph.D. 
Associate Professor 
St. Ambrose University 
Davenport, Iowa


Some Technology Resources Available to Educators

"Accountability: Meeting The Challenge With Technology," Technology & Learning, January 2002, Page 32 --- http://www.techlearning.com/db_area/archives/TL/2002/01/accountb.html 


"Teaching College Courses Online vs. Face-to-Face," by Glenn Gordon Smith, David Ferguson, Mieke Caris. T.H.E. Journal, April 2001, pp. 18-26.   http://www.thejournal.com/magazine/vault/A3407.cfm 

We interviewed 21 instructors who had taught both in the distance and the face-to-face format. The instructors ranged from assistant professors to adjunct professors. Fifteen of the 21 instructors taught in the context of the SUNY Learning Network, a non-profit, grant-funded organization that provides the State Universities of New York (SUNY) with an infrastructure, software, Web space and templates for instructors to create their online course. The Learning Network also provides workshops on developing and teaching online courses, a help desk and other technical support for Web-based distance education. The remaining six informants taught Web-based distance education courses in similarly supported situations at state universities in California and Indiana.

. . .

Once the course begins, the long hours continue. Online instructors must log on to the course Web site at least three or four times a week for a number of hours each session. They respond to threaded discussion questions, evaluate assignments, and above all answer questions clearing up ambiguities, often spending an inordinate amount of time communicating by e-mail. The many instructor hours spent online create an "online presence," a psychological perception for students that the instructor is out there and is responding to them. Without this, students quickly become insecure and tend to drop the class.

This great amount of work sounds intimidating; however, most online instructors looked forward to their time spent online as time away from their hectic face-to-face jobs. One respondent commented: "This is why I like the online environment. It's kind of a purified atmosphere. I only know the students to the extent of their work. Obviously their work is revealing about them."

The Web environment presents a number of educational opportunities and advantages over traditional classes, such as many informational resources that can be seamlessly integrated into the class. Instructors can assign Web pages as required reading, or have students do research projects using online databases. However, it is important that the instructor encourage the students to learn the skills to differentiate valid and useful information from the dregs, as the Internet is largely unregulated.

Some instructors also had online guests in their classes (authors, experts in their field, etc.) residing at a distance, yet participating in online threaded discussions with the students in the class. All these things could theoretically be accomplished in a traditional class by adding an online component; however, because online classes are already on the Web, these opportunities are integrated far more naturally.

Other advantages of online classes result from psychological aspects of the medium itself. The emphasis on the written word encourages a deeper level of thinking in online classes. A common feature in online classes is the threaded discussion. The fact that students must write their thoughts down, and the realization that those thoughts will be exposed semi-permanently to others in the class, seem to result in a deeper level of discourse. Another respondent stated:

"The learning appears more profound as the discussions seemed both broader and deeper. The students are more willing to engage both their peers and the professor more actively. Each student is more completely exposed and can not simply sit quietly throughout the semester. Just as the participating students are noticeable by their presence, the non-participating students are noticeable by their absence. The quality of students' contributions can be more refined as they have time to mull concepts over as they write, prior to posting."

The asynchronous nature of the environment means that the student (or professor) can read a posting and consider their response for a day before posting it. Every student can and, for the most part, does participate in the threaded discussions. In online classes, the instructor usually makes class participation a higher percentage of the class grade, since instructor access to the permanent archive of threaded discussions allows more objective grading (by both quantity and quality). This differs from face-to-face classes where, because of time constraints, a relatively small percentage of the students can participate in the discussions during one class session. Because of the lack of physical presence and absence of many of the usual in-person cues to personality, there is an initial feeling of anonymity, which allows students who are usually shy in the face-to-face classroom to participate in the online classroom. Therefore it is possible and quite typical for all the students to participate in the threaded discussions common to Web-based classes.

This same feeling of anonymity creates some political differences, such as more equality between the students and professor in an online class. The lack of a face-to-face persona seems to divest the professor of some authority. Students feel free to debate intellectual ideas and even challenge the instructor. One respondent stated that "In a face-to-face class the instructor initiates the action; meeting the class, handing out the syllabus, etc. In online instruction the student initiates the action by going to the Web site, posting a message, or doing something. Also, I think that students and instructors communicate on a more equal footing where all of the power dynamics of the traditional face-to-face classroom are absent."

Students are sometimes aggressive and questioning of authority in ways not seen face-to-face. With the apparent anonymity of the Internet, students feel much freer to talk. "Students tended to get strident with me online when they felt frustrated, something that never happened in face-to-face classes because I could work with them, empathize and problem solve before they reached that level of frustration," noted one respondent.

In the opening weeks of distance courses, there is an anonymity and lack of identity which comes with the loss of various channels of communication. Ironically, as the class progresses, a different type of identity emerges. Consistencies in written communication, ideas and attitudes create a personality that the instructor feels he or she knows.

"Recently I had printed out a number of student papers to grade on a plane. Most had forgotten to type their names into their electronically submitted papers. I went ahead and graded and then guessed who wrote each one. When I was later able to match the papers with the names, I was right each time. Why? Because I knew their writing styles and interests. When all of your communication is written, you figure out these things quickly."

This emergence of online identity may make the whole worry of online cheating a moot point. Often stronger one-to-one relationships (instructor-student and student-student) are formed in online courses than in face-to-face classes.

Conclusions

Contrary to intuition, current Web-based online college courses are not an alienating, mass-produced product. They are a labor-intensive, highly text-based, intellectually challenging forum which elicits deeper thinking on the part of the students and which presents, for better or worse, more equality between instructor and student. Initial feelings of anonymity notwithstanding, over the course of the semester, one-to-one relationships may be emphasized more in online classes than in more traditional face-to-face settings.

With the proliferation of online college classes, it is important for the professor to understand the flavor of online education and to be reassured as to the intellectual and academic integrity of this teaching environment.

Bob Jensen's Recap:


"Distance Learning in Accounting:  A Comparison Between a Distance and a Traditional Graduate Accounting Class," by Margaret Gagne and Morgan Shepherd, T.H.E. Journal, April 2001, pp. 58-65 --- http://www.thejournal.com/magazine/vault/A3433.cfm 

This study analyzed the performance of two class sections in an introductory graduate level accounting course in the fall semester of 1999. One section was a traditional, campus-based class taught in the conventional face-to-face lecture mode. The other section was taught in a distance education format. In the distance class, the students had no face-to-face contact with each other or the instructor. The distance students could communicate via telephone, e-mail, threaded bulletin board discussions and synchronous chat technologies. Except for the textbook, the distance class received all material for the course over the Internet. The distance section received supplemental administrative and course information, e.g., solutions to assigned problems, via the Web. These materials were distributed to the campus-based students during class.

To enhance comparability, the same text, syllabus, assignments and examinations were used in both classes. The professor (who has over 12 years of experience teaching accounting) taught both sections.

The traditional section met once a week over a 17-week semester. Each class lasted two and a half hours. During class, approximately half of the time was spent presenting and explaining material from the text; the remaining class time was used to go over the assigned homework problems.

The distance section never formally met during the same 17-week period. In an effort to provide more of a "class" feeling, the students and instructor placed profiles on the class Web site. These profiles were intended to give a personal and professional perspective of the individuals. They included information such as work history, family history, favorite hobbies, geographic location, and other miscellaneous information that may help give a sense of who the student is. Many participants uploaded a picture to give others more of an idea of who they are.
. . .

Summary

The findings of this paper supported prior research: the performance of students in a distance course was similar to the performance of students in the on-campus course for an introductory accounting graduate class. Furthermore, the students' evaluations of the course were similar, although students in the online course indicated that they were less satisfied with instructor availability than the in-class students. In terms of student performance, there did not seem to be a difference between the multiple choice exam format and the complex problem solving exam format.

Future research in this area should center on the issue of improving student perception of instructor availability. Is a richer medium required (i.e. video), or can certain procedures be incorporated to help students feel as if the instructor is more available? This theme can be carried out across different subjects to see if some subjects are more prone to the student perception problem than others. At least in this graduate level introductory accounting course, it appears as if distance education delivery is as effective as the traditional campus methodology in terms of student learning outcomes.


From Infobits on September 28, 2001

ONLINE LEARNING VERSUS CLASSROOM LEARNING

Much research into the efficacy of online learning over classroom learning has been anecdotal and of questionable quality, leading to inconclusive results and the need for further study. Two recent articles in the JOURNAL OF INTERACTIVE INSTRUCTION DEVELOPMENT address this question of efficacy.

Terrence R. Redding and Jack Rotzein ("Comparative Analysis of Online Learning Versus Classroom Learning," Journal of Interactive Instruction Development, vol. 13, no. 4, Spring 2001, pp. 3-12) compare the learning outcomes associated with three classroom groups and an online community college group in pre-licensing insurance training. They conclude that "online instruction could be highly effective" and that a "higher level of cognitive learning was associated with the online group." They also note that higher achievements of the online group can be attributed to the self-selected nature of the students, the instructional design of the online course, and the motivation associated with adult learners. Redding and Rotzein recommend that further studies be conducted in other fields of study to see if their results can be replicated in other professions or disciplines.

In the same issue Kimberly S. Dozier (Assistant Professor of English, Dakota State University) urges restraint in rushing to replace traditional classroom courses with online classes ("Affecting Education in the On-Line 'Classroom': The Good, the Bad, and the Ugly," Journal of Interactive Instruction Development, vol. 13, no. 4, Spring 2001, pp. 17-20). She cautions educators "not to forget what makes us teachers and what makes us learners. We must not forget the limitations of technology and we must not assume that an on-line course duplicates a traditional course." One of the aspects of learning that she fears may be missing in some online learning experiences is self-reflection, as students are "simply responding to a specified task and moving on to the next one."

Note: neither article is available on the Web. Check with your college or university library to obtain copies.

Journal of Interactive Instruction Development [ISSN 1040-0370] is published quarterly by the Learning Technology Institute, 50 Culpeper Street, Warrenton, VA 20186 USA; tel: 540-347-0055; fax: 540-439-3169; email: info@lti.org; Web: http://www.lti.org/ 


From Syllabus News on October 18, 2002

Online Nurse Ed Service Accredited in 50 States

eMedicine Inc., an online service for health care professionals, said it received approval to offer accredited nursing continuing education in California, and can now offer accredited nursing continuing education courses in all 50 states. The service offers over 40,000 hours of continuing education for nurses, physicians, pharmacists and optometrists, of which 10,000 hours are available for nurses. Accreditation for eMedicine nursing CE is provided through the University of Nebraska Medical Center's College of Nursing Continuing Nursing Education program. Catherine Bevil, director of continuing nursing education in UNMC’s College of Nursing, said the service’s “large audience and commitment to creating current clinical information … provides an effective outlet for delivering UNMC's College of Nursing continuing nursing education courses.”

 


Success Stories in Education Technology

LearningSoft Awarded Patent for Adaptive Assessment System

From T.H.E. Journal Newsletter on March 30, 2006
The U.S. Patent and Trademark Office has granted LearningSoft LLC ( http://www.learningsoft.net ) a patent titled "Adaptive Content Delivery System and Method," which covers the company's proprietary Learningtrac adaptive assessment system. Learningtrac uses artificial intelligence to optimize assessment and test preparation for individual students' strengths and weaknesses. The system uses a student's own knowledge base, learning patterns, and measures of attention to the material to continually adapt curriculum content to the student's needs and spur skill development. Educators are then able to monitor individual student assessments as well as track classroom progress. Later this year, Learningtrac will be integrated into LearningSoft's Indigo Learning System, which is debuting at the 2006 Florida Educational Technology Conference.
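
The news item does not describe how Learningtrac actually works, and the sketch below is not LearningSoft's patented method. It is only a generic illustration of the adaptive-assessment idea the item alludes to: keep a running mastery estimate for each skill, pick the next item to probe the weakest skill at an appropriate difficulty, and update the estimate from the student's response. The skill names, the update rule, and the selection heuristic are all hypothetical.

# Generic adaptive item selection -- NOT the patented Learningtrac algorithm,
# whose internals are not described in the news item above.
from dataclasses import dataclass
import random

@dataclass
class Item:
    prompt: str
    skill: str
    difficulty: float   # 0.0 (easy) .. 1.0 (hard)

class AdaptiveAssessment:
    def __init__(self, items, alpha=0.3):
        self.items = list(items)
        self.alpha = alpha                                   # learning rate for mastery updates
        self.mastery = {item.skill: 0.5 for item in self.items}  # start every skill at 0.5

    def next_item(self):
        """Target the weakest skill, with difficulty close to current mastery."""
        weakest = min(self.mastery, key=self.mastery.get)
        candidates = [i for i in self.items if i.skill == weakest]
        return min(candidates, key=lambda i: abs(i.difficulty - self.mastery[weakest]))

    def record(self, item, correct):
        """Nudge the skill's mastery estimate toward the observed outcome."""
        m = self.mastery[item.skill]
        self.mastery[item.skill] = (1 - self.alpha) * m + self.alpha * (1.0 if correct else 0.0)

# Toy usage: simulate ten responses and watch the mastery estimates adapt.
bank = [Item(f"Q{n}", skill, d) for n, (skill, d) in enumerate(
    [("debits_credits", 0.3), ("debits_credits", 0.7),
     ("accruals", 0.4), ("accruals", 0.8), ("inventory", 0.5)])]
session = AdaptiveAssessment(bank)
for _ in range(10):
    q = session.next_item()
    session.record(q, correct=random.random() < 0.6)         # stand-in for a real answer
print(session.mastery)

A real system would of course use calibrated item parameters and a proper psychometric model (for example, item response theory) rather than this toy update rule.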


Integrate Technology into Lesson Plans

"Better teaching with technology:  Program aims to help integrate technology into lesson plans," by Micholyn Fajen, The Des Moines Register, February 8, 2005 --- http://desmoinesregister.com/apps/pbcs.dll/article?AID=/20050208/NEWS02/502080339/1004 

Educators from two Waukee elementary schools will learn new ways to implement technology into classroom curriculum by participating in free training sessions through Heartland Area Education Agency.

Technology teams from both Brookview and Eason elementary schools will attend technology integration mentoring, part of a three-year-old Heartland program offered to area schools. This is Waukee's first time attending at the elementary level.

"The intent of the program is to provide participants with skills and strategies that prepare them to mentor other educators in the technology integration process," said Cindi McDonald, principal of Brookview.

Building principals and district administrators began instruction Wednesday. Meetings will continue into June.

The training will help educators learn to get the most of the technology they use.

"I've worked in six different districts and can say Waukee is very blessed to have a lot of hardware functioning here," McDonald said. "We have a commitment to have the computers, teachers and staff positioned in a way that we can make a difference."

Brian Pierce, technology teacher at Brookview Elementary in West Des Moines, hopes to gain more tactics that help him approach classroom teachers and show them how to integrate the skills into everyday learning.

"We have a mobile computer lab with 15 laptops teachers can pull into the classroom," Pierce said. "We want to optimize this lab with kindergarten through fifth-grade teachers and find curriculums and technological links to make that happen."

Some Brookview classrooms already integrate technology into their homework. Fourth-graders recently assigned a report of a famous person are researching information over the Internet and creating presentations on the computer.

Pierce is teaching the students how to drop their presentations into Power Point and will help them burn a CD so they can take the work home to show parents.

"We have some good and effective uses of technology here," McDonald said. "There are pockets of greatness, but we still need to build a common vocabulary among teachers. Our kindergarten through second grade still struggle in that area."

Three out-of-state technology consultants were brought in to teach the program and of 55 school districts in Heartland's region, 15 districts have teams that will attend, coming from as far as Carroll.

"We've had good responses from past participants," said Tim Graham, director of Heartland technology services. "This year we've modified the program to include administrators because teachers found they needed upper-level support of the programs. Administrators needed a better understanding of how important technology is in the classroom."

Continued in the article


Teens praise online algebra lessons, March 30, 2004 --- http://the.honoluluadvertiser.com/article/2004/Mar/30/ln/ln17a.html 

The school has enough textbooks, but the students don't need them in Yvette McDonald's algebra class at Kahuku High and Intermediate School.

Kahuku students, from left, Brendan Melemai, Daesha Johnson and James Bautista use computers instead of books in algebra class. The interactive computer program was developed last year by Honolulu Community College. Jeff Widener • The Honolulu Advertiser

And that's a good thing.

It's because her students in grades 9 through 12 learn math not with books but through an interactive computer program developed last year by Honolulu Community College and being piloted in four Hawai'i high schools, a middle school and a community college.

With this new approach, the hope is to boost high school math scores and cut down on expensive and time-consuming remedial math in college.

"It's pretty good," said 17-year-old James Bautista Jr., peering intently at the algebra equation on the screen before choosing the correct answer from several suggestions.

"Sometimes teachers make it harder than it really is. If I see it first and try to understand it myself without the teacher dictating, it's kind of better. When I'm pressured into it, I'm not good. I'm better at this where I can take my time."

While it's too soon to know if this online algebra class will improve high school math scores, end-of-semester assessment testing at HCC in mid-May will show how it's working among college students. Assessment testing will be done in high schools next year.

"They should have this at Waialua," Bautista said. "I failed math at Waialua twice — algebra and geometry. The teacher's a cool guy, but he's so quick I had a hard time keeping up with him."

"It's so much easier," agrees 17-year-old Francisco "Pancho" Peterson. "If you click on the magnifying glass, it shows you the procedure of what you should know, and that helps a lot. It shows you what to do. In a way, it's like a big cheat sheet to figure out what you did."

Continued in the article

 


From the June 25 edition of Syllabus News

Wharton webCafe Earns High Satisfaction Ratings

A survey of students of the Wharton School at the University of Pennsylvania found that 97 percent rated the school's web-based virtual meeting application -- dubbed web Cafe -- as valuable to their education experience. Since Wharton began using webCafe in 1998 as part of the school's student intranet, use of webCafe has expanded to 5,200 users, 99 percent of full-time MBA candidates, all executive MBA students, and almost all Wharton undergraduates. webCafe is one component of Wharton's plan to reshape its business education. The school's Alfred West Jr. Learning Lab is exploring methods of learning and instruction using interactive multimedia and real-time simulations. This August, it is opening Jon M. Huntsman Hall, which Wharton claims will be the largest and most sophisticated instructional technology center at any business school.

For more information, visit: http://www.wharton.upenn.edu/learning


Top K12's 100 Wired Schools --- http://FamilyPC.com/smarter.asp 
The winners are listed at http://familypc.com/smarter_2001_top.asp 

Why (Some) Kids Love School --- http://familypc.com/smarter_why_kids.asp 

Dropout rates are down and test scores are up. Students are engaged in learning and their self-esteem is soaring. So what's really going on within the classroom walls of the country's top wired schools? By Leslie Bennetts

Once upon a time, back in the olden days, kids used to exult about getting out of school, celebrating their release from drudgery by singing "No more pencils, no more books!" or so the schoolyard ditty would have it. These days, with the explosion of technology that's revolutionizing education around the country, many students are now eager to stay after school, competing for access to all the high-tech equipment that's opening up so many new opportunities to them.

For younger kids, technology is transforming the schoolwork their older siblings sometimes regarded as tedious into challenging games and activities. For high-school students, technology may banish once and for all the tired questions about relevance. Even the most rebellious adolescents are aware of the real-world value of the skills and experience they're getting in wired schools.

Teachers who have mastered the art of integrating technology into the curriculum also deserve credit. For a closer look at some of the ways educators are transforming American schools, here are six outstanding examples from this year's Top 100 Wired Schools—two elementary, two middle, and two high schools that have applied creativity as well as resources to the educational challenges of the 21st century.


"Using Hypertext in Instructional Material:  Helping Students Link Accounting Concept Knowledge to Case Applications," by Dickie Crandall and Fred Phillips, Issues in Accounting Education, May 2002, pp. 163-184 --- http://accounting.rutgers.edu/raw/aaa/pubs.htm 

We studied whether instructional material that connects accounting concept discussions with sample case applications through hypertext links would enable students to better understand how concepts are to be applied to practical case situations. Results from a laboratory experiment indicated that students who learned from such hypertext-enriched instructional material were better able to apply concepts to new accounting cases than those who learned from instructional material that contained identical content but lacked the concept-case application hyperlinks.  Results also indicated that the learning benefits of concept-case application hyperlinks in instructional material were greater when the hyperlinks were self-generated by the students rather than inherited from instructors, but only when students had generated appropriate links.  When students generated inappropriate concept-case application hyperlinks in the instructional material, the application of concepts to new cases was similar to that of other students who learned from the instructional material that lacked hyperlinks.


The 2002 CWRL Colloquium (Computers, Writing, Research, and Learning) --- http://www.cwrl.utexas.edu/currents/ 

In this issue
The CWRL Colloquium: A Window into the World of Computer-enhanced Teaching and Learning
by M. A. Syverson, CWRL Director

Teaching with technology
Collaborative Teaching in the Computer Classroom
by Alexandra Barron


Comparing Traditional and Computer-assisted Composition Classrooms
by Sarah R. Wakefield

Converting to the Computer Classroom: Technology, Anxiety, and Web-based Autobiography Assignments
by Miriam Schacht


Multimedia development, multi-user domains, and role-playing
Hell Wasn't Built in a Day: Taking the Long View on Multimedia Development
by Olin R. Bjork


Virtual Spaces, Actual Practices: MOO Pedagogy in the CWRL
by Aimee Kendall and Doug Norman


Playing Doctors, Playing Patients: Multi-user Domains and the "Teaching" of Illness
by Lee Rumbarger


Role-playing Situations Improve Writing
by M. A. Syverson

Interpreting languages; imagining disability
Towards a Hermeneutic Understanding of Programming Languages
by Clay Spinuzzi

The Imagination Gap: Making Web-based Instructional Resources Accessible to Students and Colleagues with Disabilities
by John Slatin


There are also links to archived issues!

 


Teaching Versus Research
"Favorite Teacher" Versus "Learned the Most"

It may take years for a graduate to change an evaluation of an instructor
One of Sanford's key points is that it may take years for a student to fully appreciate the quality of his or her education. What might have seemed tedious, dull, or unimportant at the time may, in the long run, turn out to be more valuable to a person's life than that which seemed immediate and exciting in the classroom. Unfortunately, as Sanford notes, that long-term value often is not captured in the immediacy of student evaluations of instruction. Wise department chairs and deans take that into account when reviewing those evaluations. But, here at Krispy Kreme U. not all department chairs and deans are wise.
Mark Shapiro commenting on a piece by Sanford Pinsker, "You Probably Don't Remember Me, But....," The Irascible Professor, July 12, 2006 --- http://irascibleprofessor.com/comments-07-12-06.htm

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm


In the movie “Ghostbusters,” Dan Aykroyd commiserates with Bill Murray after the two lose their jobs as university researchers. “Personally, I like the university. They gave us money and facilities, and we didn’t have to produce anything. You’ve never been out of college. You don’t know what it’s like out there. I’ve worked in the private sector. They expect results.” I can find some amusement in this observation, in a self-deprecating sort of way, recognizing that this perception of higher education is shared by many beyond the characters in this 1980s movie.
Jeremy Penn, "Assessment for ‘Us’ and Assessment for ‘Them’," Inside Higher Ed, June 26, 2007 --- http://www.insidehighered.com/views/2007/06/26/penn


Why then do the studies show that a faculty member's research activity and his or her teaching performance basically are uncorrelated (neither positively correlated nor negatively correlated)? My best guess is that these studies have fundamental flaws. After reading some of Nils' references as well as more recent work on the subject, I believe that most of these studies measure both teaching effectiveness and research activity incorrectly. On the teaching effectiveness side, student evaluations of teaching often are the only measure used in those studies; and, on the research productivity side generally only numbers of publications are counted. Neither of these data points really measure quality. The student evaluations often are highly correlated with the grade that a student expects to receive rather than how much the student has learned. Faculty members who are engaged in research often are demanding of themselves as well as their students, so that may skew their student evaluations. Measuring research activity by the number of papers published tends to skew the results towards those faculty members who would view themselves primarily as researchers and teachers of graduate students rather than as teacher scholars who devote as much effort to their teaching as to their research. In fact one of the correlations observed in the research is that those faculty members who publish the most often have less time available to devote to their teaching.
Nils Clausson, "Is There a Link Between Teaching and Research?" The Irascible Professor, December 30, 2004 --- http://irascibleprofessor.com/comments-12-30-04.htm 
Jensen Comment:  By definition, successful research is a contribution to new knowledge.  It cannot be conducted without scholarly mastery of existing knowledge on the topic at hand.  What Clausson seems to imply is that a great teacher can have terrific and ongoing scholarship without adding to the pile of existing knowledge.  There also is the question of great facilitators of research who do not publish.  These professors are sometimes great motivators and advisors to doctoral students.  Examples from accounting education include the now deceased Carl Nelson at the University of Minnesota and Tom Burns from The Ohio State University.  My point is that great teachers come in all varieties.  No one mold should ever be prescribed, as is often done by today's promotion and tenure committees, which sometimes discourage fantastic teaching in favor of uninteresting publication.

December 31, 2004 reply from Amy Dunbar [Amy.Dunbar@BUSINESS.UCONN.EDU]

Shapiro stated, “No one became an astronomer, or an economist, or an English professor in order to teach students astronomy, economics, or English literature. I certainly didn't.”

Au contraire. I think a lot of us entered PhD programs because we wanted to teach. I think that teaching and research are positively correlated because scholarship is infused with curiosity and care.

Amy Dunbar

UConn

December 31, 2004 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]

Hear, hear! Amy, I agree. Shapiro's assertion that "no one" gets a Ph.D. to teach is patently false, as evidenced by the overwhelming majority of my colleagues who obtained a Ph.D. degree SOLELY to obtain a teaching position (and many of whom eschew the superficiality of much of today's published accounting "research").

Thus, if this statement of Shapiro's is false, why would I believe his other statements that "not one shred of empirical evidence" exists to relate good teaching to good research? That statement is likely false too, mainly because, in my view, of construct validity issues in the studies. Look carefully at the precise wording of my following postulates:

1. Good research does not necessarily guarantee good teaching. (I believe anyone who has any experience in academe would have to accept this as well-established and supported empirically.)

2. Good teaching does not necessarily require research ... DEPENDING (a major qualifier!) ON WHAT you are trying to teach. THIS second postulate (more specifically, the qualifying predicate!) is the one that most of Shapiro's citations (I assume, since I must admit I haven't read them!) likely overlooked in their studies.

There are many subjects, including MANY undergraduate course topics, which do not require constant updating and up-to-the-minute currency, and thus a teacher may not benefit as greatly from being active in research in that area. For these, research does not have to be correlated to good teaching. So it probably isn't.

But there are many other areas which probably can NOT be taught properly by anyone who is NOT staying current with the field by being actively immersed in the present state-of-the-art. Medicine, Pharmacology, Genetics, Materials Science, shoot, any one of us could name dozens. And these fields do not need empirical evidence; it is deducible by pure logic from the objectives of the teaching activity.

And even in these fields, doing good research does not necessarily mean that you are a good teacher, but being a good teacher in the field does require research.

By overlooking the characteristics of the field, the characteristics of the course content, characteristics of the NEED of students in the course, and similar oversights, Shapiro's researchers have confounded their data so much that their conclusion (the lack of correlation between research and teaching) lacks validity, even ignoring the obvious problems with measurements that Bob pointed out.

Of course, Shapiro is a prime example of a phenomenon about which I plan to make one of my best assertions: the complete replacement of "factual reporting" with "sensationalism" in today's communication realm.

I mean, honestly, why should accountants be different from the rest of the world when it comes to abdicating the obligation to report fairly, justly, objectively, and factually? The news media sure does not report objectively (the New York Times and its affiliate the Herald Tribune are absolute jokes when it comes to embellishment, sensationalizing, biasing, coloring, and other departures from "reporting news", and they are representative of their industry). Neither do other forms of so-called "news" media, nor do practitioners of law (look at the claims of civil rights attorneys!), politicians (nothing more need be said here), so-called "reality TV", or any of the other professions which the public (erroneously) is expected to perceive as communicating reality. So why should accountants be held to a different standard than the rest of society?

Rhetorical question, of course...

Happy New Year to anyone who reads this far on my lengthy treatises. And Happy New Year to the others on this list, too!

David Fordham 
James Madison University

 

January 1, 2005 reply from Bob Jensen

Hi David,

I tend to agree with everything you said below except the key phrase "being a good teacher in the field does require research."

It would be more acceptable to me if you fine-tuned the phrase to read "being a good teacher in the field does require keeping up with research."  Of course this leads head on into the brick wall of performance reward systems that find it easier to count publications than subjectively evaluate scholarship.

A terrific surgeon or teacher of surgery is not required to contribute to new and unknown surgical knowledge and/or technique. A surgical researcher may spend a lifetime endeavoring to create a new surgical technique, but that endeavoring is not a requisite for greatness as a teacher of existing best practices.  In the teaching of surgery, experience is that requisite.

Nor does a great historian or history teacher have to contribute to new knowledge of the past in order to have an outstanding preparation to teach what is already known about the past.  Although researchers are almost paranoid to admit it, it is possible to become the world's best scholar on a topic without extending the knowledge base of the topic.

The problem with great research discovery is that endeavoring to discover often drains a lifetime of energy at the edge of the head of a pin, energy that has a high probability of draining efforts to prepare to teach about the whole pin or the pin cushion as a whole.

The key problem is having the time or energy for preparation to teach.  Research in the narrow sometimes drains from the act of preparing to teach in breadth and length.  Also knowing the history of the narrows does not necessarily mean that the researcher understands the history of the entire river (which is my feeling about some of our top empirical researchers in accounting who have very little knowledge of the history of accounting as a whole). 

Rivers versus pin cushions!  Am I mixing my metaphors again?

I agree that Shapiro made a dumb comment about why we got our doctorates and became educators. I tend to agree, however, with Nils Clausson's conclusion that seems to be lost behind Shapiro's dumb remark.

Bob Jensen

January 1, 2005 reply from Alexander Robin A [alexande.robi@UWLAX.EDU]

Wonderful! It is so nice to see these very reasonable ideas articulated. The idea that keeping up in a field in order to teach it requires active research (actually, publication numbers) rather than active reading and study is one of those unquestioned mantras that comprise educational mythology at most universities. I suspect the true reason for that belief is that it is convenient - bureaucracies like easy measurements that don't require much discernment. Counting publications is a very easy (if erroneous) way to measure faculty performance.

Robin Alexander

January 1, 2005 reply from Dennis Beresford [DBeresfo@TERRY.UGA.EDU]

Bob

I wonder whether you could further fine tune your comment to say, "being a good teacher does require keeping up with developments in the field." While keeping up with research is certainly helpful, the vast majority of accounting majors are undergrads and MAcc's who will go into (mainly) public accounting and the corporate world. And, of course, many of our accounting students are taking the class only as a requirement of a different business major. I respectfully submit that knowing what is happening in the accounting profession and broader business community is quite important to effective teaching of those students.

Some accounting research may also be relevant, particularly for teaching PhD students but that's a pretty tiny number.

Go Bulldogs and Trojans!

Denny Beresford


Cold and distant teaching vs. warm and close
Many instructors struggle with the role of rapport in teaching. For some, the response is a cool and distant teaching style. This essay argues that a style of appropriate warmth can promote student learning. It offers definitions, examples, and implications for the instructor.
Robert F. Bruner, "'Do you Expect Me to Pander to the Students?' The Cold Reality of Warmth in Teaching," SSRN Working Paper, June 2005 --- http://ssrn.com/abstract=754504 


As I said previously, great teachers come in about as many varieties as flowers.  Click on the link below to read about some of the varieties recalled by students from their high school days.  It should be noted that "favorite teacher" is not synonymous with "learned the most."  Favorite teachers are often great at entertaining and/or motivating.  Favorite teachers often make learning fun in a variety of ways.  

The recollections below tend to lean toward entertainment and "fun" teachers, but you must keep in mind that these were written after-the-fact by former high school students.  In high school, dull teachers tend not to be popular before or after the fact.  This is not always the case when former students recall their college professors.

"'A dozen roses to my favorite teacher," The Philadelphia Inquirer, November 30, 2004 --- http://www.philly.com/mld/Inquirer/news/special_packages/phillycom_teases/10304831.htm?1c 

Students may actually learn the most from pretty dull teachers with high standards and demanding assignments and exams.  Dull teachers may also be the dedicated souls who are willing to spend extra time in one-on-one sessions or extra-hour tutorials that ultimately have an enormous impact on mastery of the course.  And then there are teachers who are neither entertaining nor generous with face-to-face time but who are winners because they have developed learning materials that, in terms of student learning, far exceed what other teachers provide.  

In some cases, the “best learning” takes place in courses where students hate the teacher who, in their viewpoint, does not teach.  It has a lot to do with metacognition in learning.  See http://www.trinity.edu/rjensen/265wp.htm 

Many of our previous exchanges on the AECM about these issues are at the following links:

Grade Inflation Issues
  http://www.trinity.edu/rjensen/assess.htm#GradeInflation 

Onsite Versus Online Learning
  http://www.trinity.edu/rjensen/assess.htm#OnsiteVersusOnline 

Student Evaluations and Learning
 http://www.trinity.edu/rjensen/assess.htm#LearningStyles  

January 2, 2005 reply from MABDOLMOHAMM@BENTLEY.EDU 

In search of a definition of a "perfect teacher"

A teacher can be excellent by having one or more of a number of attributes (e.g., motivator, knowledgeable, researcher), but a teacher will be perfect if he/she has a combination of some or all of these attributes to bring the best out of students.

We all can cite anecdotal examples of great teachers that exhibited excellence in an important attribute. Below are three examples.

A few years ago an economics teacher of my son in high school admitted to parents at the beginning of the year that he had very little knowledge of the subject matter of the economics course that he was assigned to teach. He said that the school needed a volunteer to take an economics course and then teach it at the high school, and he volunteered. A former college football player, the teacher had learned to motivate others to do their best and by the end of the year he had motivated the students to learn a lot, much of it on their own. In fact to the pleasant surprise of the teacher and parents two student groups from this class made it to the state competition, one of which ended up being number one and the other ranked number 4 in the state.

I have noticed that many of the professors getting teaching awards from our beloved AAA have also been heavy hitters in publishing. While one can be a good teacher without being a heavy hitter in publishing, it may be that scholarship, broadly defined (including knowledge of current developments), is an important factor in being a good teacher. Even if one is not a heavy hitter, scholarship, as an exercise of the brain, makes one a better teacher.

Others argue that students are the best judges of good teaching. I recall having read a research piece some time ago that those who consistently rate high in student evaluations are good teachers, while those who consistently rate low are poor teachers. The ones in the middle are those for whom other factors may be at work (e.g., being too demanding or a tough grader).

Here is a research question: Do we have a comprehensive inventory of the attributes of good teaching, and if so, is it possible to come up with combinations of various attributes to define a "perfect teacher" or an "expert teacher"?

Ali Mohammad J. Abdolmohammadi, DBA, CPA 
http://web.bentley.edu/empl/a/mabdolmohamm/  
John E. Rhodes Professor of Accounting 
Bentley College 175 Forest Street Waltham, MA 02452

January 2, 2005 reply from Van Johnson [accvej@LANGATE.GSU.EDU]

Bob--

Your post reminded me of one of my favorite editorials by Thomas Sowell in 2002. It is included below.

"Good" Teachers

The next time someone receives an award as an outstanding teacher, take a close look at the reasons given for selecting that particular person. Seldom is it because his or her students did higher quality work in math or spoke better English or in fact had any tangible accomplishments that were better than those of other students of teachers who did not get an award.

A "good" teacher is not defined as a teacher whose students learn more. A "good" teacher is someone who exemplifies the prevailing dogmas of the educational establishment. The general public probably thinks of good teachers as people like Marva Collins or Jaime Escalante, whose minority students met and exceeded national standards. But such bottom line criteria have long since disappeared from most public schools.

If your criterion for judging teachers is how much their students learn, then you can end up with a wholly different list of who are the best teachers. Some of the most unimpressive-looking teachers have consistently turned out students who know their subject far better than teachers who cut a more dashing figure in the classroom and receive more lavish praise from their students or attention from the media.

My own teaching career began at Douglass College, a small women's college in New Jersey, replacing a retiring professor of economics who was so revered that I made it a point never to say that I was "replacing" him, which would have been considered sacrilege. But it turned out that his worshipful students were a mass of confusion when it came to economics.

It was much the same story at my next teaching post, Howard University in Washington. One of the men in our department was so popular with students that the big problem every semester was to find a room big enough to hold all the students who wanted to enroll in his classes. Meanwhile, another economist in the department was so unpopular that the very mention of his name caused students to roll their eyes or even have an outburst of hostility.

Yet when I compared the grades that students in my upper level class were making, I discovered that none of the students who had taken introductory economics under Mr. Popularity had gotten as high as a B in my class, while virtually all the students who had studied under Mr. Pariah were doing at least B work. "By their fruits ye shall know them."

My own experience as an undergraduate student at Harvard was completely consistent with what I later learned as a teacher. One of my teachers -- Professor Arthur Smithies -- was a highly respected scholar but was widely regarded as a terrible teacher. Yet what he taught me has stayed with me for more than 40 years and his class determined the course of my future career.

Nobody observing Professor Smithies in class was likely to be impressed by his performance. He sort of drifted into the room, almost as if he had arrived there by accident. During talks -- lectures would be too strong a word -- he often paused to look out the window and seemingly became fascinated by the traffic in Harvard Square.

But Smithies not only taught us particular things. He got us to think -- often by questioning us in a way that forced us to follow out the logic of what we were saying to its ultimate conclusion. Often some policy that sounded wonderful, if you looked only at the immediate results, would turn out to be counterproductive if you followed your own logic beyond stage one.

In later years, I would realize that many disastrous policies had been created by thinking no further than stage one. Getting students to think systematically beyond stage one was a lifetime contribution to their understanding.

Another lifetime contribution was a reading list that introduced us to the writings of top-notch minds. It takes one to know one and Smithies had a top-notch mind himself. One of the articles on that reading list -- by Professor George Stigler of Columbia University -- was so impressive that I went to graduate school at Columbia expressly to study under him. After discovering, upon arrival, that Stigler had just left for the University of Chicago, I decided to go to the University of Chicago the next year and study under him there.

Arthur Smithies would never get a teaching award by the standards of the education establishment today. But he rates a top award by a much older standard: By their fruits ye shall know them.

January 2, 2005 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]

Bob, you've hit upon an enlightening point.

Your post about alums' vote for "Best" teacher or "Favorite" teacher not being the one who "taught them the most", or even "the most entertaining" contrasts vividly with my wording when I refer to "Excellence in teaching".

The "Best" teacher in a student's (alums) eye isn't always the "most excellent" teacher.

Most of us (the public at large, even) desire to be popular, and therefore "best" or "favorite" in anything we do. But "best" and "favorite" are far more subjective and individual-dependent superlatives than "most excellent".

The latter term denotes a high level of attaining the objective of the endeavor, whereas the former terms denote a broader array of attributes (frequently skewed more towards personality traits) appealing to personal tastes, where the overriding attributes do not have to be the meeting of the fundamental objectives of 'teaching'.

I (and many of my colleagues, possibly including yourself) generally strive for excellence in teaching -- which often requires excellence in many other attributes (including personality ones, too!) in order to achieve. Unfortunately, many students concentrate their attention on the personality-related ones. And just as sadly, the AACSB accreditation jokers (along with elected state legislators at the K-12 level!) concentrate only on "knowledge transfer", "comprehension", and "measurable rubrics". Both of these extremes ignore the overall mix which composes "Excellence in teaching" in terms of achieving the educational objectives.

(And yes, I strongly believe that educational objectives include far more than mere knowledge transfer... they include motivation, inspiration, appreciation, and many other currently-*unmeasurable* traits, which is why I'm such an outspoken critic of the AACSB's "assurance of learning" shenanigans.)

By the way, if you've read this far: Bob, I've got to admit my poor choice of wording on an earlier post. I indicated that some fields (such as pharmacology, genetics, etc.) require "research" to teach well -- I didn't mean to equate research with publication as is commonly done in academe, nor did I mean to equate it with "advancing the knowledge of mankind" as it is probably more accurately defined. I meant that those fields require effort to stay on top of what's happening, as you more appropriately and accurately articulated. This can take the form of overt activity to advance the knowledge of mankind, or it can take the form of studious and constant attention to current literature and activity of others. (I guess that's what I get for becoming so immersed in my genealogical "research", which for the most part consists of studiously searching and absorbing the "literature" and activities of others, rather than creation on my own!) Anyway, I'd also like to agree strongly with your assertion that "excellent teachers come in all varieties". This is another fact which further confounds the "measurement" of excellent teaching, and is often ignored by those in the AACSB and state legislature education committees.

David Fordham 
James Madison University

January 2, 2005 reply from David Albrecht [albrecht@PROFALBRECHT.COM]

David Fordham,

Another intriguing post to the always interesting posts of Bob Jensen.

This has caused my mind to wander, and it has stalled wondering about the similarities between audit quality and teaching quality.

As I recall, there are three primary components or perspectives of audit quality: the input of the auditor (now being made public for the first time by the PCAOB), the accuracy of the auditor's output, and the public perception of the auditor. Based on the maxim that perception is reality, much musing and academic research has focused on the third component. Perhaps an example will help explain what I'm getting at. For decades, the firm of Arthur Andersen worked hard on the first two components and eventually was bestowed with the third component. Then, according to Toffler, Squires et al., Brewster, and some others, AA skimped on the first two components and eventually had the third component withdrawn. Have the final biggest firms, the Big Four, traveled the same path? I can't really tell, given the confounding that the big firms' insurance function brings to the analysis. I do know that for the largest companies (audited by the biggest auditing firms) the large number of restatements causes me to doubt recent audit quality.

In a fashion, there seem to be three similar components of excellence in teaching. First, there is the input of the teacher. There are many parts to this. There is the scholarly endeavor of "keeping up." There is the creative thought that goes into course design and material development. Of course, there is the preparation for each class, and there is the classroom pedagogy. The second component would have to be the amount and quality of learning that takes place. The third component would be the public perception of the teacher.

With respect to the national public, it is easy to see that many students engage the teachers from the most expensive, elite schools. These students seem willing to pay the price needed to get that clean opinion from the top firm, er, I mean that degree with honors from the top school. Are these students acting in the most rational manner? It's hard to tell. They seem to go to top research schools where they receive much of their instruction from graduate students, many of whom lack the American language and cultural skills that I'd think necessary for much quality. Then, they get to senior-level classes and receive instruction from professors who sometimes are too preoccupied with research to adequately shepherd their students. The elite schools try not to mess up the good students too much. The students find assurance in the perceived quality of the degree from the elite school.

Some students, frequently the less well heeled or from the poorest educated families, attend lower ranked schools. Dare I say a teaching school such as Bowling Green or James Madison? Anecdotal evidence supports the contention that my school places much emphasis on the first two components of teaching quality and does a quality job. However, not being one of the biggest schools does put a hurt on the perceived quality of the educational experience here.

I wonder if the PCAOB, the auditor's auditor, will be any better than the AACSB, the business and accounting program's auditor. I can tell from experience that a non-elite accounting program has a difference of opinion with the AACSB, not because its students come from around the world (they do) or that its graduates are in high demand by national and regional employers (they are) or that its graduates progress rapidly in their careers (they do), but because of an insufficient number of faculty publications in top-tier journals. I think some of the time the AACSB misses the boat.

Will the PCAOB? I guess that will be the true test of the similarity between auditor quality and teacher quality.

David Albrecht 
Bowling Green State University

See also Grade Inflation Versus Teaching Evaluations


Student Evaluations and Learning Styles

There is an enormous problem with assuming that students who wrote high evaluations of a course actually learned more than high-performing students who hated the course.  Happiness and learning are two different things.

Reasons why students often prefer online courses may have little or nothing to do with actual learning.  At the University of North Texas where students can sometimes choose between an onsite or an online section of a course, some students just preferred to be able to take a course in their pajamas --- http://www.trinity.edu/rjensen/255wp.htm#NorthTexas 
Some off-campus students prefer to avoid the hassle and time consumed driving to campus and spending a huge amount of time searching for parking.  Some Mexico City students claim that they can save over five hours a day in commuting time, which is time made free for studying (Jim Parnell of Texas A&M, in partnership with Monterrey Tech, delivers an ALN Web MBA Program in Mexico City) --- http://www.trinity.edu/rjensen/000aaa/0000start.htm 

In general, comparisons of onsite versus online test and grade performance will tend to show "no differences" among good students, because good students learn the material under varying circumstances.  Differences are more noteworthy for weaker students or students who tend to drop courses, but there is a huge instructor effect that is difficult to factor out of such studies. For more on this, go to http://www.trinity.edu/rjensen/assess.htm 

Online Learning Styles

Here are a few links of possible interest with regard to student evaluations and online learning styles.  In some cases you may have to contact the presenters to get copies of their papers.

Probably the best place to start is with the Journal of Asynchronous Learning Networks --- http://www.sloan-c.org/publications/jaln/index.asp

For example, one of the archived articles is entitled "Identifying Student Attitudes and Learning Styles in Distance Education" in the September 2001 edition --- http://www.sloan-c.org/publications/jaln/v5n2/v5n2_valenta.asp

Three opinion types were identified in this study: Students who identified with issues of Time and Structure in Learning, Social Interaction in Learning, and Convenience in Learning. These opinions can be used to aid educators in reaching their students and increasing the effectiveness of their online courses. At UIC, this insight had direct application to the evolution of course materials. Early application of technology merely supplied a web site on which were posted syllabus, readings and assignments. No opportunity existed for conferencing; thus, there existed no opportunity for social learning. In a subsequent semester, conferencing software was made available to the class, in addition to the website. Thus, the opportunity was added for social learning. The faculty learned, however, that every time a new technology was added, it experienced an increase in the level of effort necessary to support the student. Ultimately, the University made available a course management system, which significantly streamlined the effort on the part of faculty to make course materials available to the student. The system provides through a single URL the student's access to course materials, discussion forums, virtual groups and chat, testing, grades, and electronic communication.

This study is qualitative and confined to University of Illinois at Chicago graduate and undergraduate students. The three opinion types identified through this study, however, correlate closely with results reported in the literature. All three groups of students, representing the three opinion types, shared a belief in the importance of being able to work at home. The studies of Richards and Ridley [9] and Hiltz [10] described flexibility and convenience as both reasons students enrolled in online courses and as the perception of students once enrolled. On the other hand, all three groups of students thought unimportant the need to pay home phone bills incurred in online education, whereas Bee [13] found that students felt the university should provide financial assistance to offset the associated costs of going online. There is evidence in the literature (viz., studies by Guernsey [8] and Larson [18]) that support the opinion identified in this study of the need by some students for face-to-face interaction. Since none of the students taking the Q-sort had ever taken an online course, they were unaware of the opportunities provided by technology [8,10] to potentially increase individual attention from instructors above that normal in face-to-face course offerings. Since no post-enrollment Q-sorts were administered, there was no way to tell whether students continued to hold that opinion, or whether that opinion has changed. It is anticipated that even if the Q-set were administered to a larger number of students, similar viewpoints would still emerge.

The authors wondered whether there was an association between the opinion set held by the student and his or her learning style. Preliminary data using the Canfield Learning Styles Inventory [27] show that the factor one group--Time and Structure in Learning--exhibited a much higher than expected proportion of independent learners. (74% of the students who had high factor loadings on factor one were also classified as independent learners. This difference was significant Z = 3.00, p < .025.) One might be tempted to hypothesize a relationship between being an independent learner and having the time and structure opinion of technology and education. Similarly, one might also expect that individuals who had high factor loadings for factor two (Social Factors in Learning) would be more likely classified as social learners. Further research is necessary to understand how learning styles contribute to the experience of online education.

There is a movement in both education and business to harness the power of the World Wide Web to disseminate information. Educators and researchers, aware of this technological paradigm shift, must become invested in understanding the interactions of students and computing. The field of human-computer interface design, as applied to interaction of students in online courses, is ripe for research in the area of building better virtual learning communities (thus addressing the needs of the social learner) without overwhelming the ability of the independent learner to excel on his or her own.

 


Learning and Teaching Styles (Australia) --- http://library.trinity.wa.edu.au/teaching/styles.htm 

Online Learning Styles --- http://www.metamath.com/lsweb/dvclearn.htm  

Adapting a Course to Different Learning Styles --- http://www.glue.umd.edu/~jpaol/ASA/ 

FasTrak Consulting --- http://www.fastrak-consulting.co.uk/tactix/features/lngstyle/style04.htm 

VARK Questionnaire --- http://www.vark-learn.com/english/page.asp?p=questionnaire 

Selected professors  ---  http://online.sfsu.edu/~bjblecha/cai/cais00.htm

 JCU Study Skills --- http://www.jcu.edu.au/studying/services/studyskills/learningst/

Cross-Cultural Considerations --- http://www.trinity.edu/rjensen/cultures/culture.htm 

"How Do People Learn," Sloan-C Review, February 2004 --- 
http://www.aln.org/publications/view/v3n2/coverv3n2.htm 

Like some of the other well known cognitive and affective taxonomies, the Kolb figure illustrates a range of interrelated learning activities and styles beneficial to novices and experts. Designed to emphasize reflection on learners’ experiences, and progressive conceptualization and active experimentation, this kind of environment is congruent with the aim of lifelong learning. Randy Garrison points out that:

From a content perspective, the key is not to inundate students with information. The first responsibility of the teacher or content expert is to identify the central idea and have students reflect upon and share their conceptions. Students need to be hooked on a big idea if learners are to be motivated to be reflective and self-directed in constructing meaning. Inundating learners with information is discouraging and is not consistent with higher order learning . . . Inappropriate assessment and excessive information will seriously undermine reflection and the effectiveness of asynchronous learning. 

Reflection on a big question is amplified when it enters collaborative inquiry, as multiple styles and approaches interact to respond to the challenge and create solutions. In How People Learn: Brain, Mind, Experience, and School, John Bransford and colleagues describe a legacy cycle for collaborative inquiry, depicted in a figure by Vanderbilt University researchers  (see image, lower left).

Continued in the article

Bob Jensen has some related (oft neglected) comments about learning at http://www.trinity.edu/rjensen/265wp.htm

You can read more about online and asynchronous learning at http://www.trinity.edu/rjensen/255wp.htm 

 

Assessment Takes Center Stage in Online Learning:  
The Saga of Western Governors University

Western Governors University was formed by the Governors of 11 Western states in the United States and was later joined by Indiana and Simon Fraser University in Canada.  WGU attempted several business models, including attempts to broker courses from leading state universities and community colleges as well as a partnership with the North American branch of the U.K.'s Open University.  All of these business models have been disappointments to date, and online enrollments remain almost negligible.  WGU has nevertheless survived with tax-dollar funding from the founding states.  The WGU homepage is at http://www.wgu.edu/wgu/index.html 

One unique aspect of WGU is its dedication to competency-based assessment (administered to date by Sylvan Systems).  An important article on this is entitled "Assessment Takes Center Stage in Online Learning:  Distance educators see the need to prove that they teach effectively," by Dan Carnevale, The Chronicle of Higher Education, April 13, 2001 --- http://www.chronicle.com/free/v47/i31/31a04301.htm 

Students at Western Governors University aren't required to take any courses. To earn a degree, they must pass a series of assessment exams. The faculty members don't teach anything, at least not in the traditional sense. Instead, they serve as mentors, figuring out what students already know and what courses they need to take to pass the exams.

Assessment also plays a big role at the University of Phoenix Online. In a system modeled after the university's highly successful classroom offerings, students are grouped together in courses throughout an entire degree program, and they are given batteries of exams both before and after the program. The tests enable the university to measure exactly how much the students have learned, and to evaluate the courses.

Indeed, assessment is taking center stage as online educators experiment with new ways of teaching and proving that they're teaching effectively.

And traditional institutions, some observers say, should start taking notes.

Education researchers caution that distance educators are still in the process of proving that they can accurately assess anything, and that comparatively few distance-education programs are actually participating in the development of new testing strategies.

One difference between assessment in classrooms and in distance education is that distance-education programs are largely geared toward students who are already in the workforce, which often involves learning by doing. In many of the programs, students complete projects to show they not only understand what they've learned but also can apply it -- a focus of many assessment policies.

In addition to such projects, standardized tests are a key part of assessments in distance education. These tests are usually administered online in proctored environments, such as in a student's hometown community college.

Western Governors and the University of Phoenix Online are among the most visible institutions creating assessment methods, but they are not alone. Many other distance-education programs use some form of outcomes-based assessment tests, including Excelsior College (formerly Regents College), in Albany, N.Y.; Pennsylvania State University's World Campus; Thomas Edison State College, in Trenton, N.J.; the State University of New York's Empire State College; and University of Maryland University College.

All of higher education is moving toward outcomes-based assessments, with online education leading the way, says Peter Ewell, senior associate at the National Center for Higher Education Management Systems. The push for new assessment models in online education comes largely from competition with its older brother, traditional education, says Mr. Ewell. Because distance education is comparatively new, he says, critics often hold it to a higher standard than traditional education when judging quality. It has more to prove, and is trying to use assessments that show its effectiveness as the proof.

Online education is only one of several influences putting pressure on traditional education to do more to assess the quality of courses. Accreditation agencies, state governments, and policy boards are all heading toward an inevitable question, Mr. Ewell says: How much bang for the buck is higher education putting out?

But Perry Robinson, deputy director of higher education at the American Federation of Teachers, says assessment exams shift the emphasis away from what he considers the most important element of learning: student interaction with professors in a classroom.

The federation has been critical of distance learning in the past, saying an undergraduate degree should always include a face-to-face component. Mr. Perry says having degrees that rely on students' passing tests reduces higher education to nothing more than job training.

Also, Mr. Perry doesn't want to see the role of the professor diminished, because that person knows the material the best and works with the students day after day. "Assessment is involved in the classroom when you engage the students and see the look of befuddlement on their faces," he says.

But Peggy L. Maki, director of assessment at the American Association for Higher Education believes that all of higher education will move toward a system of assessing outcomes for students. Although distance education is contributing to this movement, it isn't the biggest factor, she says. "We're talking about a cultural change."

Some of this change is prompted by the demands of legislators and other policy makers, Ms. Maki says. Also, institutions are feeling pressure from peers to create outcomes-assessment models. "I think there have been more challenges with people saying, 'Can you really do this?'" she says. "When they do, others say, 'Well, we better follow suit.'"

But traditional and distance-education institutions alike are struggling to figure out how to use the results of assessment examinations to create programs and even budgets. "This is the hardest part of the assessment process -- how you use the results," Ms. Maki says.

Western Governors University's assessment system is intended to measure the students' competency in specific subjects. Because it doesn't matter to W.G.U. whether the students learned the material on their own or from courses they've taken through the university, the entire degree revolves around the assessment tests.

The university doesn't create its own courses. Instead, it forms partnerships with other universities around the country that have created online courses in various subjects. A student seeking a degree must show competency in a number of "domains." These include general education, such as writing and mathematics, and domains specific to the subject, such as business management.

Western Governors officials create some of their own assessment examinations and buy some from other organizations, such as the ACT and the Educational Testing Service.

For W.G.U.'s own exams, experts from the professional and academic arenas collaborate to determine what students need to demonstrate to prove they are competent in a field. Unlike traditional colleges, Western Governors separates assessment from learning. The professors who grade the assessment exams have not had any prior interaction with the student.

For the rest of the article, go to http://www.chronicle.com/free/v47/i31/31a04301.htm 

Update Message from Syllabus News on February 5, 2002

Western Governors University Meeting Access Goals

The Western Governors University released its annual report, which said the private, non-profit university, founded by 19 western governors, is achieving its goals to expand access to higher education, especially for working adults. WGU President Bob Mendenhall said, "the constraints on time due to work and family commitments are access issues ... so the flexibility provided by WGU's online, competency-based model is very appealing to a broad spectrum of students." WGU currently has about 2,500 students enrolled, up from 500 students one year ago. The average WGU student is 40 years old, and over 90 percent work full-time.

For more information, visit:  http://www.wgu.edu 

ALSO SEE:

Three sample assessment questions from Western Governors University in the area of quantitative reasoning, and the answers.


From PublicationsShare.com --- http://publicationshare.com/ 

Free Downloadable Reports from CourseShare:

Bob Jensen's threads on education technologies are at http://www.trinity.edu/rjensen/000aaa/0000start.htm 


 

From Distance Education and Its Challenges: An Overview, by D.G. Oblinger, C.A. Barone, and B.L. Hawkins (ACE, American Council on Education Center for Policy Analysis and Educause, 2001, pp. 39-40.) http://www.acenet.edu/bookstore/pdf/distributed-learning/distributed-learning-01.pdf
Appendix 4
Measures of Quality in Internet-Based Distance Learning

With the worldwide growth of distributed learning, attention is being paid to the nature and quality of online higher education.  Twenty-four benchmarks were identified in a study conducted by the Institute for Higher Education Policy.  To formulate the benchmarks, the report identified firsthand, practical strategies being used by U.S. colleges and universities considered to be leaders in online distributed learning.  The benchmarks were divided into seven categories of quality measures.

Institutional Support Benchmarks

1. A documented technology plan includes electronic security measures to ensure both quality standards and the integrity and validity of information.

2. The reliability of the technology delivery system is as close to failsafe as possible.

3. A centralized system provides support for building and maintaining the distance education infrastructure.

Course Development Benchmarks

4. Guidelines regarding minimum standards are used for course development, design, and delivery, while learning outcomes — not the availability of existing technology — determine the technology being used to deliver course content.

5. Instructional materials are reviewed periodically to ensure that they meet program standards.

6. Courses are designed to require students to engage themselves in analysis, synthesis, and evaluation as part of their course and program requirements.

Teaching/Learning Benchmarks

7. Student interaction with faculty and other students is essential and is facilitated through a variety of ways, including voice mail and/or email.

8. Feedback to student assignments and questions is constructive and provided in a timely manner.

9. Students are instructed in the proper methods of effective research, including assessment of the validity of resources.

Course Structure Benchmarks

10. Before starting an online program, students are advised about the program to determine if they possess the self-motivation and commitment to learn at a distance and if they have access to the minimal technology required by the course design.

11. Students are provided with supplemental information that outlines course objectives, concepts, and ideas, and learning outcomes for each course are summarized in a clearly written, straightforward statement.

12. Students have access to sufficient library resources that may include a “virtual library” accessible through the web.

13. Faculty and students agree on an acceptable length of time for student assignment completion and faculty response.

Student Support Benchmarks

14. Students receive information about programs, including admission requirements, tuition and fees, books and supplies, technical and proctoring requirements, and student support services.

15. Students are provided with hands-on training and information to aid them in securing material through electronic databases, inter-library loans, government archives, news services, and other sources.

16. Throughout the duration of the course/program, students have access to technical assistance, including detailed instructions regarding the electronic media used, practice sessions prior to the beginning of the course, and convenient access to technical support staff.

17. Questions directed to student service personnel are answered accurately and quickly, with a structured system in place to address student complaints.

Faculty Support Benchmarks

18. Technical assistance in course development is available to faculty, who are encouraged to use it.

19. Faculty members are assisted in the transition from classroom teaching to online instruction and are assessed during the process.

20. Instructor training and assistance, including peer mentoring, continues through the progression of the online course.

21. Faculty members are provided with written resources to deal with issues arising from student use of electronically accessed data.

Evaluation and Assessment Benchmarks

22. The program's educational effectiveness and teaching/learning process is assessed through an evaluation process that uses several methods and applies specific standards.

23. Data on enrollment, costs, and successful/innovative uses of technology are used to evaluate program effectiveness.

24. Intended learning outcomes are regularly reviewed to ensure clarity, utility, and appropriateness.

 


Reporting Assessment Data is No Big Deal for For-Profit Learning Institutions

"What Took You So Long?" by Doug Lederman, Inside Higher Ed, June 15, 2007 --- http://www.insidehighered.com/news/2007/06/15/cca

You’d have been hard pressed to attend a major higher education conference over the last year where the work of the Secretary of Education’s Commission on the Future of Higher Education and the U.S. Education Department’s efforts to carry it out were not discussed. And they were rarely mentioned in the politest of terms, with faculty members, private college presidents, and others often bemoaning proposals aimed at ensuring that colleges better measure the learning outcomes of their students and that they do so in more readily comparable ways.

The annual meeting of the Career College Association, which represents 1,400 mostly for-profit and career-oriented colleges, featured its own panel session Thursday on Education Secretary Margaret Spellings’ various “higher education initiatives,” and it had a very different feel from comparable discussions at meetings of public and private nonprofit colleges. The basic theme of the panelists and the for-profit college leaders in the audience at the New Orleans meeting was: “What’s the big deal? The government’s been holding us accountable for years. Deal with it.”

Ronald S. Blumenthal, vice president for operations and senior vice president for administration at Kaplan Higher Education, who moderated the panel, noted that the department’s push for some greater standardization of how colleges measure the learning and outcomes of their students is old hat for institutions that are accredited by “national” rather than “regional” accreditors, as most for-profit colleges are. For nearly 15 years, ever since the Higher Education Act was renewed in 1992, national accreditors have required institutions to report placement rates and other data, and institutions that perform poorly compared to their peers risk losing accreditation.

“These are patterns that we’ve been used to for more than 10 years,” said Blumenthal, who participated on the Education Department negotiating panel that considered possible changes this spring in federal rules governing accreditation. “But the more traditional schools have not done anything like that, and they don’t want to. They say it’s too much work, and they don’t have the infrastructure. We had to implement it, and we did implement it. So what if it’s more work?” he said, to nods from many in the audience.

Geri S. Malandra of the University of Texas System, another member of the accreditation negotiating team and a close adviser to Charles Miller, who headed the Spellings Commission and still counsels department leaders, said that nonprofit college officials (and the news media, she suggested) often mischaracterized the objectives of the commission and department officials as excessive standardization.

“Nobody was ever saying, there is one graduation rate for everyone regardless of the program,” Malandra said. “You figure out for your sector what makes sense as the baseline. No matter how that’s explained, and by whom, the education secretary or me, it still gets heard as one-size-fits-all, a single number, a ‘bright line’ ” standard. “I don’t think it was ever intended that way.”

The third panelist, Richard Garrett, a senior analyst at Eduventures, an education research and consulting company, said the lack of standardized outcomes measures in higher education “can definitely be a problem” in terms of gauging which institutions are actually performing well. “It’s easy to accuse all parts of higher education of having gone too far down the road of diversity” of missions and measures, Garrett said.

“On the other hand,” said Garrett, noting that American colleges have long been the envy of the world, “U.S. higher education isn’t the way it is because of standardization. It is as successful as it is because of diversity and choice and letting a thousand flowers bloom,” he said, offering a voice of caution that sounded a lot like what one might have heard at a meeting of the National Association of Independent Colleges and Universities or the American Federation of Teachers.


December 10, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu]

E-LEARNING ONLINE PRESENTATIONS

The University of Calgary Continuing Education sponsors Best Practices in E-Learning, a website that provides a forum for anyone working in the field to share their best practices. This month's presentations include:

-- "To Share or Not To Share: There is No Question" by Rosina Smith Details a new model for permitting "the reuse, multipurposing, and repurposing of existing content"

-- "Effective Management of Distributed Online Educational Content" by Gary Woodill "[R]eviews the history of online educational content, and argues that the future is in distributed content learning management systems that can handle a wide diversity of content types . . . identifies 40 different genres of online educational content (with links to examples)"

Presentations are in various formats, including Flash, PDF, HTML, and PowerPoint slides. Registered users can interact with the presenters and post to various discussion forums on the website. There is no charge to register and view presentations. You can also subscribe to their newsletter which announces new presentations each month. (Note: No archive of past months' presentations appears to be on the website.)

For more information, contact: Rod Corbett, University of Calgary Continuing Education; tel:403-220-6199 or 866-220-4992 (toll-free); email: rod.corbett@ucalgary.ca ; Web: http://elearn.ucalgary.ca/showcase/


NEW APPROACHES TO EVALUATING ONLINE LEARNING

"The clear implication is that online learning is not good enough and needs to prove its worth before gaining full acceptance in the pantheon of educational practices. This comparative frame of reference is specious and irrelevant on several counts . . ." In "Escaping the Comparison Trap: Evaluating Online Learning on Its Own Terms (INNOVATE, vol. 1, issue 2, December 2004/January 2005), John Sener writes that, rather than being inferior to classroom instruction, "[m]any online learning practices have demonstrated superior results or provided access to learning experiences not previously possible." He describes new evaluation models that are being used to judge online learning on its own merits. The paper is available online at http://www.innovateonline.info/index.php?view=article&id=11&action=article.

You will need to register on the Innovate website to access the paper; there is no charge for registration and access.

Innovate [ISSN 1552-3233] is a bimonthly, peer-reviewed online periodical published by the Fischler School of Education and Human Services at Nova Southeastern University. The journal focuses on the creative use of information technology (IT) to enhance educational processes in academic, commercial, and government settings. Readers can comment on articles, share material with colleagues and friends, and participate in open forums. For more information, contact James L. Morrison, Editor-in-Chief, Innovate; email: innovate@nova.edu ; Web: http://www.innovateonline.info/.

 


You might find some helpful information in the following reference --- http://202.167.121.158/ebooks/distedir/bestkudo.htm 

Phillips, V., & Yager, C. The best distance learning graduate schools: Earning your degree without leaving home.
This book profiles 195 accredited institutions that offer graduate degrees via distance learning. Topics include: graduate study, the quality and benefits of distance education, admission procedures and criteria, available education delivery systems, as well as accreditation, financial aid, and school policies.

A review is given at http://distancelearn.about.com/library/weekly/aa022299.htm 

Some good assessment advice is given at http://www.ala.org/acrl/paperhtm/d30.html 

A rather neat PowerPoint show from Brazil is provided at http://www.terena.nl/tnc2000/proceedings/1B/1b2.ppt  
(Click on the slides to move forward.)

The following references may be helpful in terms of evaluation forms:

  1. Faculty Course Evaluation Form
    University of Bridgeport
  2. Web-Based Course Evaluation Form
    Nashville State Technology Institute
  3. Guide to Evaluation for Distance Educators
    University of Idaho Engineering Outreach Program
  4. Evaluation in Distance Learning: Course Evaluation
    World Bank Global Distance EducatioNet

A Code of Assessment Practice is given at http://cwis.livjm.ac.uk/umf/vol5/ch1.htm 

A comprehensive outcomes assessment report (for the University of Colorado) is given at http://www.colorado.edu/pba/outcomes/ 

A Distance Learning Bibliography is available at http://mason.gmu.edu/~montecin/disedbiblio.htm 

Also see "Integration of Information Resources into Distance Learning Programs"  by Sharon M. Edge and Denzil Edge at http://www.learninghouse.com/pubs_pubs02.htm 


"A New Methodology for Evaluation: The Pedagogical Rating of Online Courses," by Nishikant Sonwalkar, Syllabus Magazine, January 2002, 18-21 --- http://www.syllabus.com/syllabusmagazine/article.asp?id=5914 

This article proposes a means of numerically evaluating various attributes of an online course and then aggregating these into an "Overall Rating."  Obviously, any model for this type of aggregation will be highly controversial since there are so many subjective criteria and so many interactive (nonlinear) complexities that lead us to doubt an additive aggregation.

The author follows up on two previous articles in Syllabus Magazine (November and December 2001) that introduced a pedagogical learning cube.  This January 2002 article takes a giant leap by aggregating metrics of six media types, five learning styles, and five types of student interactions (not to be confused with the model's component interactions).  The pedagogy effectiveness index is expressed as a summative (additive) rule over these component ratings.

I have all sorts of complaints about an additive summation index of components that are hardly independent.  However, I will leave it to the reader to read the article and form his or her own opinion.
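
To make the summative idea (and the objection to it) concrete, here is a minimal sketch in Python of what a purely additive rating looks like.  The component names and weights are hypothetical illustrations, not Sonwalkar's actual metrics; the point is only that a weighted sum treats the components as if they were independent.

    # Minimal sketch, not Sonwalkar's actual formula: component ratings
    # (media variety, learning-style coverage, interaction types) are scored
    # on a 0-1 scale and combined by a weighted sum into one overall index.
    # The component names and weights below are hypothetical.

    def pedagogy_index(ratings, weights):
        """Weighted additive aggregation of component ratings on a 0-1 scale."""
        assert ratings.keys() == weights.keys(), "every component needs a weight"
        total_weight = sum(weights.values())
        return sum(ratings[name] * weights[name] for name in ratings) / total_weight

    ratings = {
        "media_variety": 0.8,    # e.g., text, graphics, audio, video, animation, simulation
        "learning_styles": 0.6,  # coverage of the learning styles in the model
        "interaction": 0.7,      # instructor-student, student-student, and similar interactions
    }
    weights = {"media_variety": 1.0, "learning_styles": 1.0, "interaction": 1.0}

    print(round(pedagogy_index(ratings, weights), 2))  # prints 0.7

The weakness noted above shows up immediately: nothing in the additive form captures interactions between components, so two very different courses can end up with identical overall ratings.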


Number Watch:  How to Lie With Statistics

Number Watch
This is a link that every professor should look at very, very seriously and (sigh) skeptically!


Number Watch is a truly fascinating site --- http://www.numberwatch.co.uk/number%20watch.htm 

This site is devoted to the monitoring of the misleading numbers that rain down on us via the media. Whether they are generated by Single Issue Fanatics (SIFs), politicians, bureaucrats, quasi-scientists (junk, pseudo- or just bad), such numbers swamp the media, generating unnecessary alarm and panic. They are seized upon by media, hungry for eye-catching stories. There is a growing band of people whose livelihoods depend on creating and maintaining panic. There are also some who are trying to keep numbers away from your notice and others who hope that you will not make comparisons. Their stock in trade is the gratuitous lie. The aim here is to nail just a few of them.

Number of the month
Book reviews
Links
FAQs (Jensen Comment:  Especially note the FAQ on averaging)
Contact Information
Comments and Suggestions
Sorry, wrong number! The first book of the web site  
The epidemiologists The NEW book of the web site
Bits and pieces
Guest papers
Home page

The Scout Report on February 11, 2005 has this to say:

John Brignell, Professor Emeritus from the Department of Electronics & Computer Science at the University of Southampton, is the author of this informal website "devoted to the monitoring of the misleading numbers that rain down on us via the media." Brignell says he aims to "nail" a few of the "Single Issue Fanatics (SIFs), politicians, bureaucrats, quasi-scientists (junk, pseudo- or just bad)," who use misleading numbers to write catchy articles or who try to keep numbers away from public notice. Since April 2000, he has been posting a "number of the month" as well as a "number for the year," which offer his commentary on media usage of misleading numbers and explanations for why the numbers are misleading. He also posts book reviews and an extensive list of online resources on statistics and statistics education. The FAQ section includes answers to some interesting questions, such as "Is there such a thing as average global temperature?" and some more basic questions such as "What is the Normal Distribution and what is so normal about it?" The Bits and Pieces section includes a variety of short articles on statistics and his definitions for some terms he uses on the website. Visitors are also invited to join the discussion forum (complete with a few advertisements) and view comments by others who want to discuss "wrong numbers in science, politics and the media." A few comments sent to Brignell and his responses are also posted online. This site is also reviewed in the February 11, 2005 NSDL MET Report. 

Jensen Comment:

    I'm getting some feedback from respected scientists that the site has good rules but then breaks its own rules when applying them.

    I focused more on the rules themselves and found the site interesting.

    One feature that I liked is the set of statistics pages, such as the page on averaging at http://www.numberwatch.co.uk/averages.htm

    Alas!  Even statisticians with good rules lie with statistics.  I guess that alone makes this site interesting from an educational standpoint.

    Bob
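As a footnote to the comment above, here is a minimal illustration in Python of the kind of point made in the averaging FAQ: the "average" you report can say very different things about the same data. The salary figures are made up.

# Made-up salaries with one large outlier.
import statistics

salaries = [28000, 30000, 31000, 33000, 34000, 250000]

print(round(statistics.mean(salaries)))    # 67667 -- the "average salary"
print(round(statistics.median(salaries)))  # 32000 -- what the typical person earns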


Myanmar's improbable tsunami statistics and the casualty numbers game.
Kerry Howley, "Disaster Math," ReasonOnline, January 7, 2005 --- http://www.reason.com/links/links010705.shtml 


 

Drop Out Problems

READINGS ON ONLINE COURSE DROP-OUTS

"Do Online Course Drop-Out Rates Matter?" presented articles on this topic (CIT INFOBITS, Issue 46, April 2002, http://www.unc.edu/cit/infobits/bitapr02.html#3 ). Additional readings include:

"Confessions of an E-Learner: Why the Course Paradigm is All Wrong," by Eve Drinis and Amy Corrigan, ONLINELEARNING MAGAZINE, April 3, 2002. http://www.onlinelearningmag.com/onlinelearning/reports_analysis/feature_display.jsp?vnu_content_id=1457218 

OnlineLearning Magazine: Innovative Strategies for Business [ISSN: 1532-0022] is published eleven times a year by VNU Business Media, Inc., 50 S. Ninth Street, Minneapolis, MN 55402 USA; tel: 612-333-0471; fax: 612-333-6526; email: editor@onlinelearningmag.com; Web: http://www.onlinelearningmag.com/

"Five Steps For Ensuring E-Learning Success," by Pete Weaver, American Society for Training & Development (ASTD) website. http://66.89.55.104/synergy/emailmgmt/moreinfo/moreinfo.cfm?member_id=138902&sponsor_id=367&content_id=1293&b1=194&b2=192&b3=192 

American Society for Training & Development (ASTD) is a professional association concerned with workplace learning and performance issues. For more information, contact ASTD, 1640 King Street, Box 1443, Alexandria, VA 22313-2043 USA; tel: 703-683-8100 or 800-628-2783; fax: 703-683-1523; Web: http://www.astd.org/ 

Question
Who will stick it out and who will drop out of a distance education course?

Answer
See http://www.usdla.org/html/journal/JAN03_Issue/article06.html  (Includes a Literature Review)

Hypotheses

This study had two hypotheses:

  1. Locus of control, as measured by Rotter's Locus of Control scale, is a significant predictor of academic persistence.

  2. Locus of control scores increase, moving toward internality, over the course of a semester for students enrolled in web-based instruction.

 

 


Accreditation Issues

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm

Accreditation: Why We Must Change
Accreditation has been high on the agenda of the Secretary of Education’s Commission on the Future of Higher Education and not in very flattering ways. In “issue papers” and in-person discussions, members of the commission and others have offered many criticisms of current accreditation practice and expressed little faith or trust in accreditation as a viable force for quality for the future.
Judith S. Eaton, "Accreditation: Why We Must Change," Inside Higher Ed, June 1, 2006 --- http://www.insidehighered.com/views/2006/06/01/eaton

A Test of Leadership: Charting the Future of U.S. Higher Education
Charles Miller, chairman of the Secretary of Education’s Commission on the Future of Higher Education, delivered the final version of the panel’s report to the secretary herself, Margaret Spellings, on Tuesday. The report, “A Test of Leadership: Charting the Future of U.S. Higher Education,” is little changed from the final draft that the commission’s members approved by an 18 to 1 vote last month. Apart from a controversial change in language that softened the panel’s support for open source software, the only other alterations were the addition of charts and several “best practices” case studies, which examine the California State University system’s campaign to reach out to underserved students in their communities, the National Center for Academic Transformation’s efforts to improve the efficiency of teaching and learning, and the innovative curriculum at Neumont University (yes, Neumont University), a for-profit institution in Salt Lake City. Spellings said in a statement that she looks forward to “announcing my plans for the future of higher education” next Tuesday at a previously announced luncheon at the National Press Club in Washington.
Inside Higher Ed, September 20, 2006 --- http://www.insidehighered.com/news/2006/09/20/qt
"Assessing Learning Outcomes," by Elia Powers, Inside Higher Ed, September 21, 2006 --- http://www.insidehighered.com/news/2006/09/21/outcomes

“There is inadequate transparency and accountability for measuring institutional performance, which is more and more necessary to maintaining public trust in higher education.”

“Too many decisions about higher education — from those made by policymakers to those made by students and families — rely heavily on reputation and rankings derived to a large extent from inputs such as financial resources rather than outcomes.”

Those are the words of the Secretary of Education’s Commission on the Future of Higher Education, which on Tuesday handed over its final report to Secretary Margaret Spellings.

Less than a week before Spellings announces her plans to carry out the commission’s report, a panel of higher education experts met in Washington on Wednesday to discuss how colleges and universities report their learning outcomes now and the reasons why the public often misses out on this information. On this subject, the panelists’ comments fell largely in line with those of the federal commission.

The session, hosted by the Hechinger Institute on Education and the Media, at Columbia University’s Teachers College, included an assessment of U.S. News & World Report’s annual college rankings, which critics say provide too little information about where students learn best.

“The game isn’t about rankings and who’s No. 1,” said W. Robert Connor, president of the Teagle Foundation, a group that has sponsored a series of grants in “value added assessment,” intended to measure what students learn in college. Connor said colleges should be graded on a pass/fail basis, based on whether they keep track of learning outcomes and if they tell the public how they are doing.

“We don’t need a matrix of facets summed up in a single score,” added David Shulenburger, vice president of academic affairs for the National Association of State Universities and Land-Grant Colleges.

What students, parents, college counselors and legislators need is a variety of measuring sticks, panelists said. Still, none of the speakers recommended that colleges refuse to participate in the magazine’s rankings, or that the rankings go away.

“It’s fine that they are out there,” said Richard Ekman, president of the Council on Independent Colleges. “Even if it’s flawed, it’s one measure.”

Ekman said the Collegiate Learning Assessment, which measures educational gains made from a student’s freshman to senior year, and the National Survey of Student Engagement, which gauges student satisfaction on particular campuses, are all part of the full story. (Many institutions participate in the student engagement survey, but relatively few of them make their scores public.) Ekman said there’s no use in waiting until the “perfect” assessment measure is identified to start using what’s already available.

Still, Ekman said he is “wary about making anything mandatory,” and doesn’t support any government involvement in this area. He added that only a small percentage of his constituents use the CLA. (Some are hesitant because of the price, he said.)

Shulenburger plugged a yet-to-be completed index of a college’s performance, called the Voluntary System of Accountability, that will compile information including price, living arrangements, graduation rates and curriculums.

Ross Miller of the Association of American Colleges & Universities said he would like to see an organization compile a list of questions that parents and students can ask themselves when searching for a college. He said this would serve consumers better than even the most comprehensive ranking system.

The Spellings commission recommended the creation of an information database and a search engine that would allow students and policymakers to weigh comparative institutional performance.

Miller also said he would like to see more academic departments publish on their Web sites examples of student work so that applicants can gauge the nature and quality of the work they would be doing.

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm

 


Colleges On the Far, Far Left Are Having a Difficult Time With Finances and Accreditation

"Turmoil at Another Progressive College," by Elizabeth Redden, Inside Higher Ed, August 1, 2007 --- http://www.insidehighered.com/news/2007/08/01/newcollege

New College of California, which, according to its president, depends on tuition for 95 percent of its budget, finds itself at this crossroads as the closure of Antioch College’s main undergraduate institution focuses attention on the particular vulnerability of progressive colleges, which tend to feature small enrollments, individualized instruction and a commitment to producing alumni engaged in socially responsible, if not fiscally rewarding, careers. With a historic focus on non-traditional education, New College’s graduate and undergraduate program offerings today include women’s spirituality, teacher education, activism and social change, and experimental performance.

The college has repeatedly tangled with its accreditor in the past, with this month’s action coming a year, its president said, after it was removed from warning. A July 5 letter from the Western Association to the college’s president of seven years, Martin J. Hamilton, documents an ongoing financial crisis about as old as the college itself and a “pervasive failure” in proper recordkeeping. WASC also notes concerns about academic integrity at the college, including a “routine” reliance upon independent study that operates outside of published criteria or oversight. The accrediting body indicates that it found “substantial evidence of violations” of its first standard, that an institution “function with integrity.” (The letter is available on the San Francisco Bay Guardian’s blog).

Continued in article

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm


Reporting Assessment Data is No Big Deal for For-Profit Learning Institutions

"What Took You So Long?" by Doug Lederman, Inside Higher Ed, June 15, 2007 --- http://www.insidehighered.com/news/2007/06/15/cca

You’d have been hard pressed to attend a major higher education conference over the last year where the work of the Secretary of Education’s Commission on the Future of Higher Education and the U.S. Education Department’s efforts to carry it out were not discussed. And they were rarely mentioned in the politest of terms, with faculty members, private college presidents, and others often bemoaning proposals aimed at ensuring that colleges better measure the learning outcomes of their students and that they do so in more readily comparable ways.

The annual meeting of the Career College Association, which represents 1,400 mostly for-profit and career-oriented colleges, featured its own panel session Thursday on Education Secretary Margaret Spellings’ various “higher education initiatives,” and it had a very different feel from comparable discussions at meetings of public and private nonprofit colleges. The basic theme of the panelists and the for-profit college leaders in the audience at the New Orleans meeting was: “What’s the big deal? The government’s been holding us accountable for years. Deal with it.”

Ronald S. Blumenthal, vice president for operations and senior vice president for administration at Kaplan Higher Education, who moderated the panel, noted that the department’s push for some greater standardization of how colleges measure the learning and outcomes of their students is old hat for institutions that are accredited by “national” rather than “regional” accreditors, as most for-profit colleges are. For nearly 15 years, ever since the Higher Education Act was renewed in 1992, national accreditors have required institutions to report placement rates and other data, and institutions that perform poorly compared to their peers risk losing accreditation.

“These are patterns that we’ve been used to for more than 10 years,” said Blumenthal, who participated on the Education Department negotiating panel that considered possible changes this spring in federal rules governing accreditation. “But the more traditional schools have not done anything like that, and they don’t want to. They say it’s too much work, and they don’t have the infrastructure. We had to implement it, and we did implement it. So what if it’s more work?,” he said, to nods from many in the audience.

Geri S. Malandra of the University of Texas System, another member of the accreditation negotiating team and a close adviser to Charles Miller, who headed the Spellings Commission and still counsels department leaders, said that nonprofit college officials (and the news media, she suggested) often mischaracterized the objectives of the commission and department officials as excessive standardization.

“Nobody was ever saying, there is one graduation rate for everyone regardless of the program,” Malandra said. “You figure out for your sector what makes sense as the baseline. No matter how that’s explained, and by whom, the education secretary or me, it still gets heard as one-size-fits-all, a single number, a ‘bright line’ ” standard. “I don’t think it was ever intended that way.”

The third panelist, Richard Garrett, a senior analyst at Eduventures, an education research and consulting company, said the lack of standardized outcomes measures in higher education “can definitely be a problem” in terms of gauging which institutions are actually performing well. “It’s easy to accuse all parts of higher education of having gone too far down the road of diversity” of missions and measures, Garrett said.

“On the other hand,” said Garrett, noting that American colleges have long been the envy of the world, “U.S. higher education isn’t the way it is because of standardization. It is as successful as it is because of diversity and choice and letting a thousand flowers bloom,” he said, offering a voice of caution that sounded a lot like what one might have heard at a meeting of the National Association of Independent Colleges and Universities or the American Federation of Teachers.


"Accreditation: A Flawed Proposal," by Alan L. Contreras, Inside Higher Ed, June 1, 2006 --- http://www.insidehighered.com/views/2006/06/01/contreras

A recent report released by the Secretary of Education’s Commission on the Future of Higher Education recommends some major changes in the way accreditation operates in the United States. Perhaps the most significant of these is a proposal that a new accrediting framework “require institutions and programs to move toward world-class quality” using best practices and peer institution comparisons on a national and world basis. Lovely words, and utterly fatal to the proposal.

The principal difficulty with this lofty goal is that outside of a few rarefied contexts, most people do not want our educational standards to get higher. They want the standards to get lower. The difficulty faced by the commission is that public commissions are not allowed to say this out loud because we who make policy and serve in leadership roles are supposed to pretend that people want higher standards.

In fact, postsecondary education for most people is becoming a commodity. Degrees are all but generic, except for those people who want to become professors or enter high-income professions and who therefore need to get their degrees from a name-brand graduate school.

The brutal truth is that higher standards, applied without regard for politics or any kind of screeching in the hinterlands, would result in fewer colleges, fewer programs, and an enormous decrease in the number and size of the schools now accredited by national accreditors. The commission’s report pretends that the concept of regional accreditation is outmoded and that accreditors ought to in essence be lumped together in the new Great Big Accreditor, which is really Congress in drag.

This idea, when combined with the commitment to uniform high standards set at a national or international level, results in an educational cul-de-sac: It is not possible to put the Wharton School into the same category as a nationally accredited degree-granting business college and say “aspire to the same goals.”

The commission attempts to build a paper wall around this problem by paying nominal rhetorical attention to the notion of differing institutional missions. However, this is a classic question-begging situation: if the missions are so different, why should the accreditor be the same for the sake of sameness? And if all business schools should aspire to the same high standards based on national and international norms, do we need the smaller and the nationally accredited business colleges at all?

The state of Oregon made a similar attempt to establish genuine, meaningful standards for all high school graduates starting in 1991 and ending, for most purposes, in 2006, with little but wasted money and damaged reputations to show for it. Why did it fail? Statements of educational quality goals issued by the central bureaucracy collided with the desire of communities to have every student get good grades and a diploma, whether or not they could read, write or meet minimal standards. Woe to any who challenge the Lake Wobegon Effect.

So let us watch the commission, and its Congressional handlers, as it posits a nation and world in which the desire for higher standards represents what Americans want. This amiable fiction follows in a long history of such romans a clef written by the elite, for the elite and of the elite while pretending to be what most people want. They have no choice but to declare victory, but the playing field will not change.

Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission.

Online Curriculum and Certification
"Online Courses Offered to Smaller Colleges," T.H.E. Journal, September 2001, Page 16 --- http://www.thejournal.com/magazine/vault/A3621.cfm 

Carnegie Technology Education (CTE) is providing up-to-date curriculum and certification to community and smaller, four-year colleges. The courses are designed by experts in online curriculum development in conjunction with faculty at Carnegie Mellon University's School of Computer Science. CTE combines live classroom instruction with online courses delivered over an advanced Web-based system that not only provides access at any time or place, but supports homework, testing, feedback, grading and student-teacher communication.

CTE serves as a mentor to faculty at partner colleges through a unique online process, guiding them throughout the teaching experience and providing help-desk assistance, Internet-based testing, materials and tools. CTE also promotes faculty development at partner institutions by helping faculty keep pace with technology changes and real-world industry demands. The program's online delivery method makes it possible to constantly update course content, as well as continually improve the effectiveness of teaching and testing materials.

By allowing colleges to outsource IT curriculum and faculty training, CTE helps institutions avoid the large investments necessary to build similar capabilities within their department. CTE's curriculum and teacher training can also be a competitive advantage to help colleges attract and retain qualified faculty. Carnegie Technology Education, Pittsburgh, PA, (412) 268-3535, www.carnegietech.org .

Accreditation Alternatives --- http://businessmajors.about.com/library/weekly/aa050499.htm 


"Missed Connections Online colleges complain about traditional institutions' tough credit-transfer policies," by Dan Carnevale, The Chronicle of Higher Education, October 18, 2002 --- http://chronicle.com/free/v49/i08/08a03501.htm 

TAKING CREDIT

Students who take courses from online colleges that have national accreditation, rather than the regional accreditation held by most traditional colleges, often have difficulty transferring their credits to traditional colleges. Here are some of the institutions that have granted transfer credit, or have agreed to transfer credits in the future, for courses taught at American Military University, which is nationally accredited but not regionally accredited:
Many colleges refuse to grant credit for courses at American Military University, including the following:

Continued at http://chronicle.com/free/v49/i08/08a03501.htm  


From the Syllabus News on December 24, 2001

Commerce Bancorp, Inc., which calls itself "America's Most Convenient Bank," said training courses provided through its Commerce University have received expanded credit recommendations from the American Council on Education (ACE). The bank, whose employees can receive college credit through the program, has received credit recommendations for two customer service training programs. Employees may apply the credit recommendations to college degree programs in which they are participating. Commerce University offers nearly 1,700 courses to employees each year via seven schools related to its areas of operation, including its School of Retail Banking, School of Lending, and School of Insurance.

For more information, visit: http://commerceonline.com

Bob Jensen's threads on distance education and training courses can be found at http://www.trinity.edu/rjensen/000aaa/0000start.htm 


From Infobits on July 27, 2001

VISIBLE KNOWLEDGE PROJECT

The Visible Knowledge Project (VKP) is a five-year collaborative project focused on "improving the quality of college and university teaching through a focus on both student learning and faculty development in technology-enhanced environments."

In the course of the project faculty on twenty-five campuses will "design and conduct systematic classroom research experiments focused on how certain student-centered pedagogies, enhanced by a variety of new technologies, improve higher order thinking skills and significant understanding in the study of history, literature, culture, and related interdisciplinary fields."

Resources generated by the project will include:
-- a set of curriculum modules representing the reflective work of the faculty investigators;
-- three research monographs capturing the findings of the project;
-- a set of multimedia faculty development resources;
-- a set of guides, directed at students, for novice learners to better use primary historical and cultural material on the Internet; and
-- a set of online faculty development and support seminars, for the investigating faculty, faculty on the core campuses, and graduate students participating in the Project's professional development programs.

For more information about VKP, link to http://crossroads.georgetown.edu/vkp/ 

The Visible Knowledge Project is based at Georgetown University's Center for New Designs in Learning and Scholarship (CNDLS). For more information about CNDLS, see their website at http://candles.georgetown.edu/ 

Project partners include the American Studies Association's Crossroads Project, the Center for History and New Media (George Mason University), the American Social History Project (CUNY Graduate Center), the Carnegie Foundation for the Advancement of Teaching, and the TLT Group with the American Association for Higher Education.


From Infobits on July 27, 2001

NEW JOURNAL ON INFORMATION AND COMPUTER SCIENCES TEACHING AND LEARNING

INNOVATIONS IN TEACHING AND LEARNING IN INFORMATION AND COMPUTER SCIENCES ELECTRONIC JOURNAL (ITALICS) is a new peer-reviewed online journal published by the Learning and Teaching Support Network Centre for Information and Computer Sciences (LTSN-ICS). ITALICS Electronic Journal will contain papers on current information and computer sciences teaching, including: developments in computer-based learning and assessment; open learning, distance learning, collaborative learning, and independent learning approaches; staff development; and the impact of subject centers on learning and teaching.

The journal is available, at no cost, at http://www.ics.ltsn.ac.uk/pub/italics/index.html


The Changing Faces of Virtual Education --- http://www.col.org/virtualed/ 
Dr. Glen Farrell, Study Team Leader and Editor
The Commonwealth of Learning

RELEASED IN JULY 2001 by The Commonwealth of Learning (COL): The Changing Faces of Virtual Education, a study on the latest “macro developments” in virtual education. This is a follow-up on COL’s landmark study on current trends in “virtual” delivery of higher education (The Development of Virtual Education: A global perspective, 1999). Both reports were funded by the British Department for International Development and are available on this web site.

One of the conclusions of the authors of the 1999 report was that the development of virtual education was “more rhetorical than real!” Dr. Glen Farrell, study team leader and editor of both reports, says “This follow-up study concludes that, two years later, virtual education development is a lot more rhetorical, and a lot more real!”

In terms of the rhetoric, virtual education is now part of the planning agenda of most organisations concerned with education and training. And the terminology being used to describe the activities is even more imprecise and confusing! On the reality side, there are many more examples of the use of virtual education in ways that add value to existing, more traditional delivery models. However, a remarkable feature of this surging interest in virtual education is that it remains largely focussed on ways to use technology to deliver the traditional educational products (i.e., programmes and courses) in ways that make them more accessible, flexible, and cheaper and that can generate revenues for the institution.

As global discussions on closing the “digital divide” have observed, it is not surprising that the report notes that a major feature of the current state of virtual education development is that it depends on where you live. The growth is largely occurring in countries with mature economies and established information and communication infrastructure (ICTs). A lack of such infrastructure, together with the lack of development capital, means that the developing countries of the world have not been able to, as yet, use virtual education models in their efforts to bring mass education opportunities to their citizens.

However, the report demonstrates that there are several trends emerging that are likely to bring about radical changes to the way we think about the concepts of campus, curriculum, courses, teaching/learning processes, credentials/awards and the way ICTs can be utilised to enable and support learning. These trends, called “macro developments” in the report, include new venues for learning, the use of “learning objects” to define and store content, new organisational models, online learner support services, quality assurance models for virtual education and the continuing evolution of ICTs. Each of these “macro developments” is defined and described in separate chapters of the report. The final chapter looks at their impact on the development of virtual education models in the future. While the conclusions will be of general interest, particular attention has been paid to the role these developments are likely to have in the evolution of virtual education systems in developing countries.

The entire study is available on-line from this page. By clicking on the various hyperlinks below you will be able to download and open the individual chapters or the entire book in Acrobat (.PDF) format. (The chapter files are not created with internal bookmark hyperlinks, but the all-in-one file has bookmarks throughout for easier navigation.) Acrobat documents can also be resized on screen for readability but are usually best viewed when printed. Adobe Acrobat version 3.0 is required to download and read the files. With version 4.0 each Chapter's actual page numbering is retained in Acrobat's "Go To Page" facility and "Print Range" selections.

The Changing Faces of Virtual Education

CHAPTER FILES TO VIEW OR DOWNLOAD IN PDF FORMAT    

Preliminary pages: title page, copyright page, contents   (pg. i-iv) 160kb

Foreword, Prof. Gajaraj Dhanarajan and Acknowledgements   (pg. v-viii) 120kb  

Chapter 1:     Introduction, Dr. Glen M. Farrell   (pg. 1-10) 234kb  

Chapter 2:    The Changing Venues for Learning, Mr. Vis Naidoo   (pg. 11-28) 307kb

Chapter 3:    The Continuing Evolution of ICT Capacity: The Implications for Education, 
                      Dr. Tony Bates   (pg. 29-46) 335kb

Chapter 4:    Object Lessons for the Web: Implications for Instructional Development, 
                      Mr. David Porter   (pg. 47-70) 639kb

Chapter 5:    The Provision of Learner Support Services Online, Dr. Yoni Ryan   (pg. 71-94) 389kb

Chapter 6:    The Development of New Organisational Arrangements in Virtual Learning, 
                      Dr. Peter J. Dirr    (pg. 95-124) 448kb

Chapter 7:    Quality Assurance, Ms. Andrea Hope    (pg. 125-140) 304kb

Chapter 8:    Issues and Choices, Dr. Glen Farrell    (pg. 141-152) 247kb

Note especially that Andrea Hope's Chapter 7 deals with assessment issues.  She mentions three sites that attempt to weed out suspicious degree programs.

degree.net --- http://www.degree.net/ (note the links to accreditation issues at http://www.degree.net/guides/accreditation.html )

Most of the calls and e-mail messages we get concern accreditation: What is it, how important is it, how can you tell if a school's really accredited, and so forth. While accreditation is a complex and sometimes baffling field, it's really quite simple to get the basics. This on-line guide offers you:

All About Accreditation: A brief overview of what you really need to know about accreditation, including GAAP (Generally Accepted Accrediting Practices). Yes, there really are fake accrediting agencies, and yes some disreputable schools do lie. This simple set of rules tells how to sort out truth from fiction. (The acronym is, of course, borrowed from the field of accounting. GAAP standards are the highest to which accountants can be held, and we feel that accreditation should be viewed as equally serious.)

GAAP-Approved Accrediting Agencies: A listing of all recognized accrediting agencies, national, regional, and professional, with links that will allow you to check out schools.

Agencies Not Recognized Under GAAP: A list of agencies that have been claimed as accreditors by a number of schools, some totally phony, some well-intentioned but not recognized.

FAQs: Some simple questions and answers about accreditation and, especially, unaccredited schools

AboutEducation at http://www.about.com/education/ 

Adult/Continuing Education
Distance Learning
Votech Education

 

College/University
Business Majors
College Admissions: U.S.
College Life
Graduate School
International Education
Job Searching: College Grads

 

 


 

Primary/Secondary Education
Creative Writing for Teens
Daycare/Preschool
Elementary Educators
Family Crafts
Homeschooling
Private Schools
Secondary School Educators
Special Education
Teachers: Canada

 

 


Also Recommended
AtoZTeacherStuff
ExamPractice
Inspiring Teachers
  LessonPlansPage
LessonPlanz
Search4Colleges

WorldwideLearn --- http://www.worldwidelearn.com/ 

At this site you'll find hundreds of online courses and learning resources in 46 subject areas offered by educational institutions, companies and individuals from all over the world.

Online training, long distance learning, distance education, eLearning, Web-based training: whatever you call it, learning online is about you and how you can pursue learning and education at your convenience. It's learning when you want and where you want.

What do you want to learn? Do you want to:

get a degree online, train for a new career, learn web design, find corporate training resources, take professional development courses, learn new software, continue your education, or learn a new skill or hobby

Whatever your goals are, World Wide Learn is here to help you find the online courses, learning and education that you want.

Use this site as your first step towards continuing your education online.

Other training and education finders are listed at http://www.trinity.edu/rjensen/crossborder.htm 


Linda Peters provides a frank overview of the various factors underlying student perceptions of online learning. Such perceptions, she observes, are not only informed by the student's individual situation (varying levels of computer access, for instance) but also by the student's individual characteristics: the student's proficiency with computers, the student's desire for interpersonal contact, or the student's ability to remain self-motivated --- 

Technology Source, a free, refereed, e-journal at http://horizon.unc.edu/TS/default.asp?show=issue&id=44 
IN THE SEPTEMBER/OCTOBER 2001 ISSUE


"Improving Student Performance in Distance Learning Courses," by Judy A. Serwatka, T.H.E. Journal, April 2002, pp. 46-51 --- http://www.thejournal.com/magazine/vault/A4002.cfm 

The tests were particularly problematic. Quizzes were not given for the on-campus course since it was an introductory course, and the students seemed to keep up well with the material. But I discovered the online students were not studying the appropriate material for the tests. To address this, online quizzes were introduced to the course Web site for the students to take as many times as they wanted. The scores are not recorded and the questions are in the same format as on the actual tests, although they are not exactly the same. Ten questions are chosen randomly from a bank of 20 for each quiz. In addition, each chapter has its own quiz. Students say they have found these quizzes to be invaluable.

The tests have been developed in a manner similar to the quizzes. Each 100-point test is created from a 200-question test bank. As each student logs in their test is created randomly from the test bank. This makes cheating extremely difficult because each test contains different questions. Even if the questions are the same, they are randomized so they do not appear in the same order. And although the test is open book, the students are admonished to study, because the questions are in random order and they do not have time to look up the answers to each question. The tests are timed and automatically submitted at the end of the time limit. The addition of these practice quizzes has dramatically improved performance on the tests.

A point about testing that should be made is that many educators are concerned about students finding someone else to take tests for them. I agree with the statement made by Palloff and Pratt (1999): "Cheating is irrelevant in this process because the participant would be cheating only him- or herself." Although attempts are made to minimize the threat, educators should not let this prevent them from teaching online. Technology will allow educators to verify the identity of students taking online tests in the future, so educators must trust students for now.
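The random test generation Serwatka describes is easy to picture. Here is a minimal sketch in Python (not her actual system; the question bank is a stand-in) of drawing each student's test from a larger bank and shuffling both the question order and the answer choices.

import random

# A stand-in 200-question bank, as in the article.
question_bank = [
    {"stem": "Question %d" % i, "choices": ["A", "B", "C", "D"], "answer": "A"}
    for i in range(1, 201)
]

def build_test(bank, n_questions=100, seed=None):
    rng = random.Random(seed)              # a per-student seed yields a different test at each login
    drawn = rng.sample(bank, n_questions)  # draw questions without replacement
    test = []
    for q in drawn:
        choices = list(q["choices"])
        rng.shuffle(choices)               # randomize the order of the choices as well
        test.append({**q, "choices": choices})
    return test

student_test = build_test(question_bank, seed="student-42")
print(len(student_test), student_test[0]["stem"])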


September 22 message from Craig Polhemus [Joedpo@AOL.COM

A book by the same authors was included in the AAA's Faculty Development Bookshelf, which was undergoing a "slow shutdown" the last I heard, so some discounted copies may still be available.

Classroom Assessment Techniques: A Handbook for College Teachers (2nd Ed), T.A. Angelo and K.P. Cross, Jossey-Bass, San Francisco , 1993.

 ( This book is said to be a classic and provides useful examples of assessment techniques.)


Software for Online Examinations and Quizzes

Question
How can I give online examinations?

Answer
If it's a take-home test, the easiest thing is probably to put an examination up on a Web server or a Blackboard/WebCT server. For example, you might put up a Word doc file or an Excel xls file as a take-home examination. You can even embed links to your Camtasia video files in that examination so that video becomes part of an examination question. Then have each student download the exam, fill out the answers, and return the file to you via email attachment for grading. One risk is that the returned file might carry a virus even though the student is not aware that his/her computer added it.

In order to avoid the virus risk of files students attach via email, I had an old computer that I used to open all email attachments from most anybody. Then in the rare event that the attached file was carrying a virus I did not infect my main machines. Good virus protection software is essential even on your old computer.

If students are restricted as to what materials can be used during examinations or who can be consulted for help, an approach that I used is examination partnering. I posted quizzes (not full examinations) at a common time when all students were required to take them. Each student was randomly assigned a partner, and each took the quiz in the presence of that partner. Each student was then required to sign an attest form stating that his or her partner abided by the rules of the quiz. I only used this for weekly quizzes; course examinations were given in class with me as a proctor. Partnered quizzes worked very well in courses where students had to master software like MS Access, since they could perform software usage activities as part of the quiz.
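The random pairing itself is trivial to automate. Here is a minimal sketch in Python (the roster names are hypothetical): shuffle the class list and pair adjacent names, folding a leftover student into a trio.

import random

def assign_partners(students, seed=None):
    rng = random.Random(seed)
    roster = list(students)
    rng.shuffle(roster)
    pairs = [roster[i:i + 2] for i in range(0, len(roster), 2)]
    if len(pairs) > 1 and len(pairs[-1]) == 1:  # odd class size: make the last group a trio
        pairs[-2].extend(pairs.pop())
    return pairs

print(assign_partners(["Ann", "Ben", "Cal", "Dee", "Eve"], seed=1))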

Giving online interactive examinations via a Web server is more problematic. A huge problem is that most universities do not allow student feedback on instructors' Web pages. When you fill a shopping cart at an online vendor site such as Amazon, Amazon is letting you as a customer send a signal back that you added something to your shopping cart. Amazon allows customers to send signals back to an Amazon server. Universities generally do not allow this type of feedback from students on a faculty Web server.

 

Believe it or not, I resist forwarding advertising. Whenever I communicate about products, there is no remuneration to me in any way.

The following message is an advertisement, and I have never tried these products (i.e., no free samples for Bob). But these products do sound interesting, so I thought you might like to know about them. It's a really competitive world for vendors of course authoring tools. Products have to have something special to be "survivors."

I added the product message below to the following sites:

Assessment and Testing --- http://www.trinity.edu/rjensen/assess.htm 

History of Course Authoring Systems --- http://www.trinity.edu/rjensen/290wp/290wp.htm 

February 25, 2004 from Leo Lucas [leo@e-learningconsulting.com

Hi Bob, thanks for providing information about authoring tools on http://www.trinity.edu/rjensen/290wp/290wp.htm. I have two new authoring tools that may be of interest to you and your readers.
 
e-Learning Course Development Kit
URL: http://www.e-learningconsulting.com/products/authoringtool.html
 
Many people use HTML editors such as Dreamweaver and FrontPage to create e-learning courses. While these editors are great for creating information they lack essential e-learning features. The e-Learning Course Development Kit provides these features. The Kit provides templates to create questions, course-wide navigation, a table of contents and links for a glossary and other information. The Kit creates courses that work with SCORM, a standard way to communicate with a Learning Management System (LMS). The support for SCORM lets you run the course in multiple sessions, keep track of bookmarks and record the student's progress through the course. The Kit can be purchased online for $99.
 
Test Builder
URL: http://www.e-learningconsulting.com/products/testbuilder.html
 
Test Builder lets you author tests quickly and easily with a text editor. Absolutely no programming is required. With Test Builder you can create tests and quizzes with true-false, multiple choice, fill-in-the-blank and matching questions. It can randomize the sequence of questions and choices and it can randomly select questions from a question pool. You can limit the number of attempts and set the passing score. Test Builder supports SCORM. Test Builder can be purchased online for $149.
 
We wanted to create e-learning tools that would work in an academic setting. So we created tools with these capabilities:
- The tools are affordable.
- They work for the casual user. You can create a small course or test without much fuss.
- They come with documented source code so you can modify or extend the tools to meet your specific needs.
- They add value to your existing investments in technology. They will deliver courses/tests in a browser and work with an LMS that supports SCORM 1.2.
 
Please let me know if you need more information about these tools. Thanks, Leo
 
Leo Lucas
leo@e-learningconsulting.com
www.e-learningconsulting.com
 
P.S. Your home in the white mountains is beautiful.

Hi Bob,

I recommend that you take a look at Exam Builder 4 at http://www.exambuilder.com/ 

Create a FREE evaluation account today and be up and running in 5 minutes with no obligation! 

My threads on assessment are at http://www.trinity.edu/rjensen/assess.htm 

Hope this helps!

Bob Jensen

Bob,

I've scheduled a health economics class in a computer lab this spring. The PCs are configured with their CRTs tightly packed. I'd like to be able to use the machines to give quizzes and exams, but the proximity of the CRTs makes at least casual "peeking" almost a certainty.

Can you suggest or point me to any software into which I could insert quiz or exam questions that would
> shuffle the order of questions on the screen
> shuffle the order of multiple choice questions
> randomize the numbers in quantitative problems
> keep track of the answers
> automatically score the responses and send me a file of grades?

Back in the Apple II days, there was SuperPilot. But that language does not seem to have been successful enough to be ported to the IBM PCs, to say nothing of being revised and improved. ??

Thanks for whatever thoughts you might be able to share,

Bob XXXXX
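None of the requested features is exotic. As a minimal sketch in Python (with a hypothetical one-question quiz and a made-up grade-file format, not a substitute for the tools such as Respondus mentioned below), here is how randomized numbers in a quantitative problem and automatic scoring can work.

import csv
import random

def make_problem(student_id):
    rng = random.Random(student_id)        # each student gets different numbers
    price, qty = rng.randint(5, 20), rng.randint(100, 500)
    stem = "Total revenue if price = $%d and quantity = %d?" % (price, qty)
    return stem, price * qty

def score(student_id, submitted, tolerance=0.01):
    _, key = make_problem(student_id)      # regenerate the answer key from the same seed
    return 1 if abs(submitted - key) <= tolerance * key else 0

# Hypothetical submissions, written out as a grade file the instructor can open in Excel.
answers = {"s001": 3400, "s002": 1200}
with open("grades.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student", "score"])
    for sid, ans in answers.items():
        writer.writerow([sid, score(sid, ans)])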

 


February 15, 2003 message from caking [caking@TEMPLE.EDU

Respondus has exam software for Blackboard, WebCt and others. I am just now trying it out --- http://www.respondus.com/ 

Carol King
Temple University

 


The term "electroThenic portfolio," or "ePortfolio," is on everyone's lips.  What does this mean?

"The Electronic Portfolio Boom: What's it All About?," by Trent Batson, Syllabus, December 2002, pp. 14-18 --- http://www.syllabus.com/article.asp?id=6984 
(Including Open Knowledge Initiative OKI, Assessment, Accreditation, and Career Trends)

The term "electroThenic portfolio," or "ePortfolio," is on everyone's lips. We often hear it associated with assessment, but also with accreditation, reflection, student resumes, and career tracking. It's as if this new tool is the answer to all the questions we didn't realize we were asking.

A portfolio, electronic or paper, is simply an organized collection of completed work. Art students have built portfolios for decades. What makes ePortfolios so enchanting to so many is the intersection of three trends:

We've reached a critical mass, habits have changed, and as we reach electronic "saturation" on campus, new norms of work are emerging. Arising out of this critical mass is a vision of how higher education can benefit, which is with the ePortfolio.

We seem to be beginning a new wave of technology development in higher education. Freeing student work from paper and making it organized, searchable, and transportable opens enormous possibilities for re-thinking whole curricula: the evaluation of faculty, assessment of programs, certification of student work, how accreditation works. In short, ePortfolios might be the biggest thing in technology innovation on campus. Electronic portfolios have a greater potential to alter higher education at its very core than any other technology application we've known thus far.

The momentum is building. A year ago, companies I talked with had not even heard of ePortfolios. But at a focus session in October, sponsored by Educause's National Learning Infrastructure Initiative ( www.educause.edu/nlii/ ), we found out how far this market has come: A number of technology vendors and publishers are starting to offer ePortfolio tools. The focus session helped us all see the bigger picture. I came away saying to myself, "I knew it had grown, but I had no idea by how much!"

ePortfolio developers are making sure that their platforms can accept the full range of file types and content: text, graphics, video, audio, photos, and animation. The manner in which student work is turned in, commented on, turned back to students, reviewed in the aggregate over a semester, and certified can be—and is being—deeply altered and unimaginably extended.

This tool brings to bear the native talents of computers—storage, management of data, retrieval, display, and communication—to challenge how to better organize student work to improve teaching and learning. It seems, on the surface, too good to be true.

ePortfolios vs. Webfolios

Since the mid-90s, the term "ePortfolio" or "electronic portfolio" has been used to describe collections of student work at a Web site. Within the field of composition studies, the term "Webfolio" has also been used. In this article, we are using the current, general meaning of the term, which is a dynamic Web site that interfaces with a database of student work artifacts. Webfolios are static Web sites where functionality derives from HTML links. "E-portfolio" therefore now refers to database-driven, dynamic Web sites, not static, HTML-driven sites.
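A minimal sketch of that distinction, in Python: an ePortfolio is backed by structured records of student work artifacts, so views can be generated on demand rather than hand-linked as static HTML pages. The fields below are hypothetical, not any vendor's schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Artifact:
    student: str
    title: str
    media_type: str               # text, video, audio, image, animation, ...
    submitted: date
    tags: list = field(default_factory=list)

portfolio = [
    Artifact("J. Doe", "Senior thesis draft", "text", date(2002, 11, 5), ["writing"]),
    Artifact("J. Doe", "Studio critique", "video", date(2002, 12, 1)),
]

# Because the artifacts are records in a database, a page such as
# "everything tagged 'writing'" is a query, not a hand-edited HTML file.
writing = [a for a in portfolio if "writing" in a.tags]
print(len(writing))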

So, What's the Bad News?
Moving beyond the familiar one-semester/one-class limits of managing student learning artifacts gets us into unfamiliar territory. How do we alter the curriculum to integrate portfolios? How do we deal with long-term storage, privacy, access, and ongoing vendor support? What about the challenge of interoperability among platforms so student work can move to a new campus upon transfer?

In short, how do we make the ePortfolio an enterprise application, importing data from central computing, serving the application on a central, secure server, and managing an ever-enlarging campus system? Electronic portfolios have great reach in space and time so they will not be adopted lightly. We've seen how extensively learning management systems such as WebCT, Blackboard, and Angel can alter our campuses. ePortfolios are much more challenging for large-scale implementations.

Still, ePortfolio implementations are occurring on dozens if not hundreds of campuses. Schools of education are especially good candidates, as they're pressured by accrediting agencies demanding better-organized and accessible student work. Some statewide systems are adopting ePortfolio systems as well. The Minnesota State Colleges and Universities system and the University of Minnesota system have ePortfolios. Electronic portfolio consortia are also forming. The open-source movement, notably MIT's Open Knowledge Initiative (OKI), has embraced the ePortfolio as a key application within the campus computing virtual infrastructure.

Moreover, vendors, in order to establish themselves as the market begins to take shape, are already introducing ePortfolio tools. Several companies, including BlackBoard, WebCT, SCT, Nuventive, Concord, and McGraw-Hill, are said to either have or are developing electronic-portfolio tools.

 

ePortfolio Tools and Resources

Within the National Learning Infrastructure Initiative is a group called The Electronic Portfolio Action Committee (EPAC). EPAC has been led over the last year by John Ittelson of Cal State Monterey Bay. Helen Barrett of the University of Alaska at Anchorage, a leading founder of EPAC, has been investigating uses of ePortfolio tools for years. MIT's Open Knowledge Initiative (OKI) has provided leadership and consulting for the group, along with its OKI partner, Stanford University. The Carnegie Foundation has been active within EPAC, as have a number of universities.

What follows is a list of ePortfolio tools now available or in production:

• Epsilen Portfolios, IUPUI, www.epsilen.com

• The Collaboratory Project, Northwestern, http://collaboratory.nunet.net

• Folio Thinking: Personal Learning Portfolios, Stanford, http://scil.stanford.edu/research/mae/folio.html

• Catalyst Portfolio Tool, University of Washington, www.catalyst.washington.edu

• MnSCU e-folio, Minnesota State Colleges and Universities, www.efoliomn.com

• Carnegie Knowledge Media Lab, Carnegie Foundation for the Advancement of Teaching, www.carnegiefoundation.org/kml/

• Learning Record Online (LRO) Project, The Computer Writing and Research Lab at the University of Texas at Austin, www.cwrl.utexas.edu/~syverson/olr/contents.html

• Electronic Portfolio, Johns Hopkins University, www.cte.jhu.edu/epweb

• CLU Webfolio, California Lutheran University, www.folioworld.com

• Professional Learning Planner, Vermont Institute for Science, Math and Technology, www.vismt.org

• Certification Program Portfolio, University of Missouri-Columbia and LANIT Consulting, https://portfolio.coe.missouri.edu/

• Technology Portfolio and Professional Development Portfolio, Wake Forest University Department of Education, www.wfu.edu/~cunninac/edtech/technologyportfolio.htm

• e-Portfolio Project, The College of Education at the University of Florida, www.coe.ufl.edu/school/portfolio/index.htm

• PASS-PORT (Professional Accountability Support System using a PORTal Approach) University of Louisiana at Lafayette and Xavier University of Louisiana, www.thequest.state.la.us/training/

• The Connecticut College e-Portfolio Development Consortium, www.union.edu/PUBLIC/ECODEPT/kleind/conncoll/

• The Kalamazoo College Portfolio, Kalamazoo College, www.kzoo.edu/pfolio

• Web Portfolio, St. Olaf College, www.stolaf.edu/depts/cis/web_portfolios.htm

• The Electronic Portfolio, Wesleyan University, https://portfolio2.wesleyan.edu/names.nsf?login

• The Diagnostic Digital Portfolio (DDP), Alverno College, www.ddp.alverno.edu/

• E-Portfolio Portal, University of Wisconsin-Madison, http://portfolios.education.wisc.edu/

• Web Folio Builder, TaskStream Tools of Engagement, www.taskstream.com

• FolioLive, McGraw-Hill Higher Education, www.foliolive.com

• Outcomes Assessment Solutions, TrueOutcomes, www.trueoutcomes.com/index.html

• Chalk & Wire, www.chalkandwire.com

• LiveText, www.livetext.com

• LearningQuest Professional Development Planner, www.learning-quest.com/

• Folio by eportaro, www.eportaro.com

• Concord (a digital content server for BlackBoard systems), www.concord-usa.com

• iWebfolio by Nuventive (now in a strategic alliance with SCT), www.iwebfolio.com

• Aurbach & Associates, www.aurbach.com/

Continued at http://www.syllabus.com/article.asp?id=6984 


Grade Inflation Versus Teaching Evaluations


How do you measure the best religion? The best marriage? Hard to say. The same is true in assessing colleges.
Bernard Fryshman, "Comparatively Speaking," Inside Higher Ed, February 21, 2007 --- http://www.insidehighered.com/views/2007/02/21/fryshman


Chocolate Coated Teaching Evaluations
A new study shows that giving students chocolate leads to improved results for professors. “Fudging the Numbers: Distributing Chocolate Influences Student Evaluations of an Undergraduate Course,” is set to be published in an upcoming edition of the journal Teaching of Psychology. While they were graduate students at the University of Illinois at Chicago, the paper’s authors, Benjamin Jee and Robert Youmans, became interested in what kind of environment instructors created right before handing out the evaluations. Their theory: Outside factors could easily play a role in either boosting or hurting a professor’s rating.
Elia Powers, "Sweetening the Deal," Inside Higher Ed, October 18, 2007 --- http://www.insidehighered.com/news/2007/10/18/sweets
Jensen Comment
One of my former colleagues left a candy dish full of chocolate morsels outside her door 24/7. She also had very high teaching evaluations. At last I know the secret of her success. I can vouch for the fact that her dish of chocolate, plus her chocolate chip cookies the size of pancakes, also greatly improved relations with at least one senior faculty member.

On a somewhat more serious side of things, there is evidence (though certainly not in the case of my cookie-baking colleague) that grade inflation in recent years is also linked to efforts to influence teaching evaluations. See below.


Question
What factors most heavily influence student performance and desire to take more courses in a given discipline?

Answer
These outcomes are too complex to be predicted very well. Sex and age of instructors have almost no impact. Teaching evaluations have a very slight impact, but there are just too many complexities to find dominant factors cutting across a majority of students.

Oreopoulos said the findings bolster a conclusion he came to in a previous academic paper that subjective qualities, such as how a professor fares on student evaluations, tell you more about how well students will perform and how likely they are to stay in a given course than do observable traits such as age or gender. (He points out, though, that even the subjective qualities aren’t strong indicators of student success.) “If I were concerned about improving teaching, I would focus on hiring teachers who perform well on evaluations rather than focus on age or gender,” he said.
Elia Powers, "Faculty Gender and Student Performance," Inside Higher Ed, June 21, 2007 --- http://www.insidehighered.com/news/2007/06/21/gender

Jensen Comment
A problem with increased reliance on teaching evaluations to measure instructor performance is that this, in turn, tends to encourage grade inflation --- see below.


For Trivia Buffs and Serious Researchers
Thousands of College Instructors Ranked on Just About Everything

November 13, 2007 message from David Albrecht [albrecht@PROFALBRECHT.COM]

There is a popular teacher in my department. When this fellow teaches a section of a multi-section course, his section fills immediately and there is a waiting list. My department does not like an imbalance in class size, so they monitor enrollment in his section. No one is permitted to add his section until all other sections have at least one more student than his.

I'm concerned about student choice, about giving them a fair chance to get into his section instead of the current random timing of a spot opening up in his section.

Does anyone else have this situation at your school? How do you manage student sign-ups for a popular teacher? Any practical suggestions would be greatly appreciated.

David Albrecht
Bowling Green

November 14, 2007 reply from Bob Jensen

Hi David,

I think the first thing to study is what makes an instructor so popular. There can be good reasons (tremendous preparation, inspirational, caring, knowing each student), bad reasons (easy grader, no need to attend class), and reasons that are questionable without being ipso facto good or bad (entertaining, humorous).

The RateMyProfessor site now has some information on most college instructors in a number of nations --- http://www.ratemyprofessors.com/index.jsp  The overwhelming factor leading to popularity is grading, since the number one concern revealed by college students is grading. Of course there are many problems in this database, and many instructors and administrators refuse to even look at the RateMyProfessor archives. Firstly, student reporting is self-selective. The majority of students in any class do not submit evaluations; a fringe element (often outliers for and against) tends to provide most of the information. Since colleges do know the class sizes, it is possible to get an idea about "sample" size, although these are definitely not random samples. It's a little like book and product reviews on Amazon.com.

There are both instructors who are not rated at all on RateMyProfessor and others who are too thinly rated (e.g., less than ten evaluations) to have their evaluations taken seriously. For example, one of my favorite enthusiastic teachers is the award-winning Amy Dunbar who teaches tax at the University of Connecticut. Currently there are 82 instructors in the RateMyProfessor archives who are named Dunbar. But not a single student evaluation has apparently been sent in by the fortunate students of Amy Dunbar. Another one of my favorites is Dennis Beresford at the University of Georgia. But he only has one (highly favorable) evaluation in the archives. I suspect that there's an added reporting bias. Both Amy and Denny mostly teach graduate students. I suspect that graduate students are less inclined to fool with RateMyProfessor.

Having said this, there can be revealing information about teaching style, grading, exam difficulty, and other things factoring into good and bad teaching. Probably the most common thing I've noted is that the top-rated professors usually get comments about making the class "easy." Now that can be taken two ways. It's a good thing to make difficult material seem easier while still grading on the basis of mastering the difficult material. It is quite another thing to leave out the hard parts so students never really master the difficult parts of the course.

If nothing else, RateMyProfessor says a whole lot about the students we teach. The first thing to note is how these college-level students often spell worse than high school dropouts. In English classes such bad grammar may be intentional, but I've read enough term papers over the years to know that dependence upon spell checkers in word processors has made students worse at spelling in messages they do not have the computer check. They're definitely Fonex spellers.

Many students, certainly not all, tend to prefer easy graders. For example, currently the instructor ranked Number 1 in the United States by RateMyProfessor appears to be an easy grader, although comments by only a few individual students should be taken with a grain of salt. Here's Page One (five out of 92 evaluations) of 19 pages of summary evaluations at http://www.ratemyprofessors.com/ShowRatings.jsp?tid=23294

11/13/07 HIST101 5 5 5 5   easiest teacher EVER
11/12/07 abcdACCT 1 1 1 1   good professor
11/11/07 HistGacct 3 2 4 1   Good teacher. Was enjoyable to heat teach. Reccomend class. Made my softmore year.
11/10/07 HISTACCT 5 5 5 5   Very genious.
11/8/07 histSECT 3 5 4 4   amazing. by far the greatest teacher. I had him for Culture and the Holocust with Schiffman and Scott. He is a genius. love him.

Does dropping student presentations really improve ratings? Although making a course easy is popular, is it a good thing to do? Here are the Page 3 ratings (five out of 55 evaluations) of the instructor ranked Number 2 in the United States:

12/21/05 Spanish 102 3 5 5 5   One of the best professors that I have ever had. Homework is taken up on a daily base but, grading is not harsh. No presentations.
11/2/05 SPA 102 4 5 5 3   Wow, a great teacher. Totally does not call people out and make them feel stupid in class, like a lot of spanish teachers. The homework is super easy quiz grades that can be returned with corrections for extra points. You have to take her for Spa 102!!!! You actually learn in this class but is fun too!
10/27/05 Span 102 4 5 5 5   I love Senora Hanahan. She is one of the best teachers I ever had. She is very clear and she is super nice. She will go out of her way just to make sure that you understand. I Love Her! I advise everyone to take her if you have a choice. She is great!!
9/14/05 SPA 201 4 5 5 5   I am absolutly not suprised that Senora Hanahan has smiley faces on every rating. She is awesme and fun.
8/25/05 SPA 102 4 5 5 5   I LOVE her! Absolutely wonderful! Goes far out of her way to help you and remembers your needs always. She will call you at home if you tell her you need help, and she will do everything possible to keep you on track . I have no IDEA how she does it! She really wants you to learn the language. She's pretty and fun and absolutely wonderful!

 

Students, however, are somewhat inconsistent about grading and exam difficulties. For example, read the summary outcomes for the instructor currently ranked as Number 8 in the United States --- http://www.ratemyprofessors.com/ShowRatings.jsp?tid=182825
Note this is only one page out of ten pages of comments:

10/31/07 hpd110 5 3 2 4   she is pushing religion on us too much... she should be more open minded. c-lots is always forcing her faith based lessons down our throats. she makes me wanna puke.
10/14/07 PysEd100 1 1 1 1   She is no good in my opinion.
5/22/07 HPD110 5 5 5 5   Dr. Lottes is amazing! it is almost impossible to get lower than an A in her class as long as you show up. her lectures are very interesting and sometimes it's almost like going to therapy. the tests and activities are easy and during the test there are group sections so it'll help your test grades. she is very outgoing and fun! so take her!
12/7/06 HDP070 2 5 5 2   Grades the class really hard, don't take if you are not already physically fit. Otherwise, she's an amazing teacher. You can tell she really cares about her students.

Read the rest of the comments at http://www.ratemyprofessors.com/ShowRatings.jsp?tid=182825

 

It's possible to look up individual colleges, and I looked up Bowling Green State University, which is your current home base, David. There are currently 1,322 instructors rated at Bowling Green. I then searched the Department of Accounting. There are currently ten instructors rated. The highest rated professor (in terms of average evaluations) has the following Page One evaluations:

4/9/07 mis200 4 5 5 1 i admit, i don't like the class (mis200) since i think it has nothing to do with my major. but mr. rohrs isn't that hard, and makes the class alright.
4/5/07 mis200 3 4 4 1 Other prof's assign less work for this class, but his assignments aren't difficult. Really nice guy, helpful if you ask, pretty picky though.
4/4/07 Acct102 2 5 5 2 Easy to understand, midwestern guy. Doesn't talk over your head.
12/14/06 mis200 4 5 5 2 Kind of a lot of work but if you do good on it you will def do good...real cool guy
12/10/06 BA150 4 5 5 4 Mr. Rohrs made BA 150 actually somewhat enjoyable. He is very helpful and makes class as interesting as possible. He is also very fair with grading. Highly Recommend.

 

Your evaluations make me want to take your classes, David. However, only 36 students have submitted evaluations, and my guess is that over those same years you've taught hundreds of students. Still, we can probably extrapolate that you make dull old accounting interesting and entertaining to students.

In answer to your question about dealing with student assignments to multiple sections, I have no answers. Many universities cycle pre-registration according to accumulated credits earned. Hence seniors sign up first and first-year students get the leftovers. Standby signups are handled according to timing, much like airlines dole out standby tickets.

It is probably a bad idea to let instructors themselves add students to the course. Popular teachers may be deluged with students seeking favors, and some instructors do not know how to say no even though they may be hurting other students by admitting too many students. Fortunately, classes are generally limited by the number of seats available. Distance education courses do not have that excuse for limiting class size.

 

PS
For research and sometimes entertainment, it's interesting to read the instructor feedback comments concerning their own evaluations of RateMyProfessor --- http://www.mtvu.com/professors_strike_back/

You can also enter the word "humor" into the top search box and investigate the broad range of humor and humorous styles of instructors.

Bob Jensen

Also see the following:

Bob Jensen's threads on the dysfunctional aspects of teacher evaluations on grade inflation --- http://www.trinity.edu/rjensen/Assess.htm#GradeInflation


Question
What topic dominates instructor evaluations on RateMyProfessors.com (or RATE for short)?

"RateMyProfessors — or His Shoes Are Dirty," by Terry Caesar, Inside Higher Ed, July 28, 2006 --- http://www.insidehighered.com/views/2006/07/28/caesar

But the trouble begins here. Like those guests, students turn out to be candid about the same thing. Rather than sex, it’s grades. Over and over again, RATE comments cut right to the chase: how easy does the professor grade? If easy, all things are forgiven, including a dull classroom presence. If hard, few things are forgiven, especially not a dull classroom presence. Of course we knew students are obsessed with grades. Yet until RATE could we have known how utterly, unremittingly, remorselessly?

And now the obsession is free to roam and cavort, without the constraints of the class-by-class student evaluation forms, with their desiderata about the course being “organized” or the instructor having “knowledge of subject matter.” These things still count. RATE students regularly register them. But nothing counts like grades. Compared to RATE, the familiar old student evaluation forms suddenly look like searching inquiries into the very nature of formal education, which consists of many other things than the evaluative dispositions of the professor teaching it.

What other things? For example, whether or not the course is required. Even the most rudimentary of student evaluation forms calls for this information. Not RATE. Much of the reason a student is free to go straight for the professorial jugular — and notwithstanding all the praise, the site is a splatfest — is because course content can be merrily cast aside. The raw, visceral encounter of student with professor, as mediated through the grade, emerges as virtually the sole item of interest.

Of course one could reply: so what? The site elicits nothing else. That’s why it’s called, “rate my professors,” and not “rate my course.” In effect, RATE takes advantage of the slippage always implicit in traditional student evaluations, which both are and are not evaluations of the professor rather than the course. To be precise, they are evaluations of the professor in terms of a particular course. This particularity, on the other hand, is precisely what is missing at the RATE site, where whether or not a professor is being judged by majors — a crucial factor for departmental and college-wide tenure or promotion committees who are processing an individual’s student evaluations — is not stipulated.

Granted, a student might bring up being a major. A student might bring anything up. This is why RATE disappoints, though, because there’s no framework, not even that of a specific course, to restrain or guide student comments. “Sarcastic” could well be a different thing in an upper-division than in a lower-division course. But in the personalistic RATE idiom, it’s always a character flaw. Indeed, the purest RATE comments are all about character. Just as the course is without content, the professor is without performative ability. Whether he’s a “nice guy” or she “plays favorites,” it’s as if the student has met the professor a few times at a party, rather than as a member of his or her class for a semester.

RATE comments are particularly striking if we compare those made by the professor’s colleagues as a result of classroom observations. Many departments have evolved extremely detailed checksheets. I have before me one that divides the observation into four categories, including Personal Characteristics (10 items), Interpersonal Relationships (8), Subject Application/Knowledge (8), and Conducting Instruction (36). Why so many in the last category? Because performance matters — which is just what we tell students about examinations: each aims to test not so much an individual’s knowledge as a particular performance of that knowledge.

Of course, some items on the checksheet are of dubious value, e.g. “uses a variety of cognitive levels when asking questions.” So it goes in the effort to itemize successful teaching, an attempt lauded by proponents of student evaluations or lamented by critics. The genius of RATE is to bypass the attempt entirely, most notoriously with its “Hotness Total.” Successful teaching? You may be able to improve “helpfulness” or “clarity.” But you can’t very well improve “hotness.” Whether or not you are a successful teacher is not safely distant at RATE from whether or not you are “hot.”

Perhaps it never was. In calling for a temperature check, RATE may merely be directly addressing a question — call it the charisma of an individual professor — that traditional student evaluations avoid. If so, though, they avoid it with good reason: charisma can’t be routinized. When it is, it becomes banal, which is one reason why the critical comments are far livelier than the celebratory ones. RATE winds up testifying to one truism about teaching: It’s a lot easier to say what good teaching isn’t than to say what it is. Why? One reason is, because it’s a lot easier for students who care only about teachers and not about teaching to say so.

Finally, what about these RATE students? How many semester hours have they completed? How many classes did they miss? It is with good reason (we discover) that traditional student evaluation forms are careful to ask something about each student. Not only is it important for the administrative processing of each form. Such questions, even at a minimal level, concede the significance in any evaluation of the evaluating subject. Without some attention to this, the person under consideration is reduced to the status of an object — which is, precisely, what the RATE professor becomes, time after time. Students on RATE provide no information at all about themselves, not even initials or geographical locations, as given by many of the people who rate books and movies on amazon.com or who give comments on columns and articles on this Web site.

In fact, students at RATE don’t even have to be students! I know of one professor who was so angered at a comment made by one of her students that she took out a fake account, wrote a more favorable comment about herself, and then added more praise to the comments about two of her colleagues. How many other professors do this? There’s no telling — just as there’s no telling about local uses of the site by campus committees. Of course this is ultimately the point about RATE: Even the student who writes in the most personal comments (e.g. “hates deodorant") is completely safe from local retribution — never mind accountability — because the medium is so completely anonymous.

Thus, the blunt energies of RATE emerge as cutting edge for higher education in the 21st century. In this respect, the degree of accuracy concerning any one individual comment about any one professor is beside the point. The point is instead the medium itself and the nature of the judgements it makes possible. Those on display at RATE are immediate because the virtual medium makes them possible, and anonymous because the same medium requires no identity markers for an individual. Moreover, the sheer aggregation of the site itself — including anybody from anywhere in the country — emerges as much more decisive than what can or cannot be said on it. I suppose this is equivalent to shrugging, whatever we think of RATE, we now have to live with it.

I think again of the very first student evaluation I received as a T.A. The result? I no longer remember. Probably not quite as bad as I feared, although certainly not as good as I hoped. The only thing I remember is one comment. It was made, I was pretty sure, by a student who sat right in the front row, often put her head down on the desk (the class was at 8 a.m.) and never said a word all semester. She wrote: “his shoes are dirty.” This shocked me. What about all the time I had spent, reading, preparing, correcting? What about how I tried to make available the best interpretations of the stories required? My attempts to keep discussions organized, or just to have discussions, rather than lectures?

All irrelevant, at least for one student? It seemed so. Worse, I had to admit the student was probably right — that old pair of brown wingtips I loved was visibly becoming frayed and I hadn’t kept them shined. Of course I could object: Should the state of a professor’s shoes really constitute a legitimate student concern? Come to this, can’t you be a successful teacher if your shoes are dirty? In today’s idiom, might this not even strike at least some students all by itself as being, well, “hot"? In any case, I’ve never forgotten this comment. Sometimes it represents to me the only thing I’ve ever learned from reading my student evaluations. I took it very personally once and I cherish it personally still.

Had it appeared on RATE, however, the comment would feel very different. A RATE[D] professor is likely to feel like a contestant on “American Idol,” standing there smiling while the results from the viewing audience are totaled. What do any of them learn? Nothing, except that everything from the peculiarities of their personalities to, ah, the shine of their shoes, counts. But of course as professors we knew this already. Didn’t we? Of course it might always be good to learn it all over again. But not at a site where nobody’s particular class has any weight; not in a medium in which everybody’s words float free; and not from students whose comments guarantee nothing except their own anonymity. I’ll bet some of them even wear dirty shoes.

July 28, 2006 reply from Alexander Robin A [alexande.robi@UWLAX.EDU]

Two quotes from a couple of Bob Jensen's recent posts:

"Of course we knew students are obsessed with grades." (from the RateMyProfessors thread)

"The problem is that universities have explicit or implicit rankings of "journal quality" that is largely dictated by research faculty in those universities. These rankings are crucial to promotion, tenure, and performance evaluation decisions." (from the TAR thread)

These two issues are related. First, students are obsessed with grades because universities, employers and just about everyone else involved are obsessed with grades. One can also say that faculty are obsessed with publications because so are those who decide their fates. In these two areas of academia, the measurement has become more important than the thing it was supposed to measure.

For the student, ideally the learning is the most important outcome of a class and the grade is supposed to reflect how successful the learning was. But the learning does not directly and tangibly affect the student - the grade does. In my teaching experience students, administrators and employers saw the grade as being the key outcome of a class, not the learning.

Research publication is supposed to result from a desire to communicate the results of research activity that the researcher is very interested in. But, especially in business schools, this has been turned on its head and the publication is most important and the research is secondary - it's just a means to the publication, which is necessary for tenure, etc.

It's really a pathetic situation in which the ideals of learning and discovery are largely perverted. Had I fully understood the magnitude of the problem, I would have never gone for a PhD or gotten into teaching. As to what to do about it, I really don't know. The problems are so deeply entrenched in academic culture. Finally I just gave up and retired early hoping to do something useful for the rest of my productive life.

Robin Alexander

Bob Jensen's threads on teaching evaluations are at http://www.trinity.edu/rjensen/assess.htm#TeachingStyle

Bob Jensen's threads on teaching evaluations and learning styles are at http://www.trinity.edu/rjensen/assess.htm#LearningStyles


 


Dumbing Education Down

President George W. Bush's signature education reform -- the No Child Left Behind Act -- is coming in for a close inspection in Congress. And, it seems, members on both sides of the aisle have plenty of ideas of how to tinker with NCLB. But almost nobody is talking about the law's central flaw: Its mandate that every American schoolchild must become "proficient" in reading and math while not defining what "proficiency" is. The result of this flaw is that we now have a patchwork of discrepant standards and expectations that will, in fact, leave millions of kids behind, foster new (state-to-state) inequities in education quality, and fail to give the United States the schools it needs to compete globally in the 21st century . . . Meanwhile, the federal mandate to produce 100% proficiency fosters low standards, game-playing by states and districts, and cynicism and rear-end-covering by educators. Tinkering with NCLB, as today's bills and plans would do, may ease some of the current law's other problems. But until lawmakers muster the intestinal fortitude to go after its central illusions, America's needed education makeover is not going to occur.
Chester E. Finn Jr., "Dumbing Education Down," The Wall Street Journal, October 5, 2007; Page A16 --- Click Here
Mr. Finn is a senior fellow at Stanford's Hoover Institution and president of the Thomas B. Fordham Institute.


NCLB = No Child Left Behind Act
A September 2007 Thomas B. Fordham Institute report found NCLB's assessment system "slipshod" and characterized by "standards that are discrepant state to state, subject to subject, and grade to grade." For example, third graders scoring at the sixth percentile on Colorado's state reading test are rated proficient. In South Carolina the third grade proficiency cut-off is the sixtieth percentile.
Peter Berger, "Some Will Be Left Behind," The Irascible Professor, November 10, 2007 --- http://irascibleprofessor.com/comments-11-10-07.htm


"Beyond Merit Pay and Student Evaluations," by James D. Miller, Inside Higher Ed, September 8, 2007 --- http://www.insidehighered.com/views/2007/09/07/miller 

What tools should colleges use to reward excellent teachers? Some rely on teaching evaluations that students spend only a few minutes filling out. Others trust deans and department chairs to put aside friendships and enmities and objectively identify the best teachers. Still more colleges don’t reward teaching excellence and hope that the lack of incentives doesn’t diminish teaching quality.

I propose instead that institutions should empower graduating seniors to reward teaching excellence. Colleges should do this by giving each graduating senior $1,000 to distribute among their faculty. Colleges should have graduates use a computer program to distribute their allocations anonymously.

My proposal would have multiple benefits. It would reduce the tension between tenure and merit pay. Tenure is supposed to insulate professors from retaliation for expressing unpopular views in their scholarship. Many colleges, however, believe that tenured professors don’t have sufficient incentives to work hard, so colleges implement a merit pay system to reward excellence. Alas, merit pay can be a tool that deans and department heads use to punish politically unpopular professors. My proposal, however, provides for a type of merit pay without giving deans and department heads any additional power over instructors. And because the proposal imposes almost no additional administrative costs on anyone, many deans and department heads might prefer it to a traditional merit pay system.

Students, I suspect, would take their distribution decisions far more seriously than they do end-of-semester class evaluations. This is because students are never sure how much influence class evaluations have on teachers’ careers, whereas the link between their distributions and their favorite teachers’ welfare would be clear. Basing merit pay on these distributions, therefore, will be “fairer” than doing so based on class evaluations. Furthermore, these distributions would provide very useful information to colleges in making tenure decisions or determining whether to keep employing a non-tenure track instructor.

The proposal would also reward successful advising. A good adviser can make a student’s academic career. But since advising quality is difficult to measure, colleges rarely factor it into merit pay decisions. But I suspect that many students consider their adviser to be their favorite professor, so great advisers would be well rewarded if graduates distributed $1,000 among faculty.

Hopefully, these $1,000 distributions would get students into the habit of donating to their alma maters. The distributions would show graduates the link between donating and helping parts of the college that they really liked. Colleges could even ask their graduates to “pay back” the $1,000 that they were allowed to give their favorite teachers. To test whether the distributions really did increase alumni giving, a college could randomly choose, say, 10 percent of a graduating class for participation in my plan and then see if those selected graduates did contribute more to the college.

My reward system would help a college attract star teachers. Professors who know they often earn their students adoration will eagerly join a college that lets students enrich their favorite teachers.

Unfortunately, today many star teachers are actually made worse off because of their popularity. Students often spend much time talking to star teachers, make great use of their office hours and frequently ask them to write letters of recommendation. Consequently, star teachers have less time than average faculty members do to conduct research. My proposal, though, would help correct the time penalty that popularity so often imposes on the best teachers.

College trustees and regents who have business backgrounds should like my idea because it rewards customer-oriented professors. And anything that could persuade trustees to increase instructors’ compensation should be very popular among faculty.

But my proposal would be the most popular among students. It would signal to students that the college is ready to trust them with some responsibility for their alma mater’s finances. It would also prove to students that the way they have been treated at college is extremely important to their school.

James D. Miller is an associate professor of economics at Smith College.

Jensen Comment
One-time "gifts" to teachers are not the same as salary increases that are locked in year after year until the faculty member resigns or retires. This type of reward system is also likely to encourage grade-inflation popularity contests. And some students might ask why they are being charged $1,000 more in tuition to be doled out selectively as bonuses to faculty.

But by far the biggest flaw in this type of reward system is its bias toward large class sections. Some of the most brilliant research professors teach advanced-level courses to much smaller classes than instructors teaching large sections of first- and second-year students. Is it a good idea for a top specialist to abandon advanced courses for majors in order to reap greater financial rewards from teaching elementary courses with more students?
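To make that class-size bias concrete, here is a minimal back-of-the-envelope sketch in Python. The class sizes and allocation shares below are purely hypothetical illustrations of my own, not figures from Miller's article:

# Hypothetical illustration of the class-size bias in the $1,000-per-graduate proposal.
# All numbers are invented for illustration only.

def expected_payout(class_size, share_per_student):
    """Expected payout to an instructor whose students each allocate a given
    fraction of their $1,000 to that instructor."""
    return class_size * share_per_student * 1000

# Suppose a popular intro instructor receives a modest 5% share from each of
# 200 students, while a seminar instructor receives a generous 25% share from
# each of 12 students.
intro = expected_payout(class_size=200, share_per_student=0.05)    # $10,000
seminar = expected_payout(class_size=12, share_per_student=0.25)   # $3,000

print(f"Intro instructor:   ${intro:,.0f}")
print(f"Seminar instructor: ${seminar:,.0f}")

Even with much lower per-student enthusiasm, the large-section instructor comes out far ahead, which is exactly the incentive problem noted above.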

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm


Question
Guess which parents most strongly object to grade inflation?

Hint: Parents Say Schools Game System, Let Kids Graduate Without Skills

The Bredemeyers represent a new voice in special education: parents disappointed not because their children are failing, but because they're passing without learning. These families complain that schools give their children an easy academic ride through regular-education classes, undermining a new era of higher expectations for the 14% of U.S. students who are in special education. Years ago, schools assumed that students with disabilities would lag behind their non-disabled peers. They often were taught in separate buildings and left out of standardized testing. But a combination of two federal laws, adopted a quarter-century apart, have made it national policy to hold almost all children with disabilities to the same academic standards as other students.
John Hechinger and Daniel Golden, "Extra Help:  When Special Education Goes Too Easy on Students," The Wall Street Journal, August 21, 2007, Page A1 ---  http://online.wsj.com/article/SB118763976794303235.html?mod=todays_us_page_one

Bob Jensen's fraud updates are at http://www.trinity.edu/rjensen/FraudUpdates.htm


A Compelling Case for Reforming the Current Teaching Evaluation Process

"Bias, the Brain, and Student Evaluations of Teaching," by Debrorah Jones Merritt, Ohio State University College of Law, SSRN, January 2007 --- http://papers.ssrn.com/sol3/papers.cfm?abstract_id=963196

Student evaluations of teaching are a common fixture at American law schools, but they harbor surprising biases. Extensive psychology research demonstrates that these assessments respond overwhelmingly to a professor's appearance and nonverbal behavior; ratings based on just thirty seconds of silent videotape correlate strongly with end-of-semester evaluations. The nonverbal behaviors that influence teaching evaluations are rooted in physiology, culture, and habit, allowing characteristics like race and gender to affect evaluations. The current process of gathering evaluations, moreover, allows social stereotypes to filter students' perceptions, increasing risks of bias. These distortions are inevitable products of the intuitive, “system one” cognitive processes that the present process taps. The cure for these biases requires schools to design new student evaluation systems, such as ones based on facilitated group discussion, that enable more reflective, deliberative judgments. This article, which will appear in the Winter 2007 issue of the St. John's Law Review, draws upon research in cognitive decision making, both to present the compelling case for reforming the current system of evaluating classroom performance and to illuminate the cognitive processes that underlie many facets of the legal system.


Professor Socrates' Teaching Evaluations:  He's a Drag

"Hemlock Available in the Faculty Lounge advertisement Article tools," by Thomas Cushman, The Chronicle of Higher Education, March 16, 2007 --- http://chronicle.com/temp/reprint.php? id=6fnxs4gx7j6qr4v7qn567y5hb52ywb33

Teaching evaluations have become a permanent fixture in the academic environment. These instruments, through which students express their true feelings about classes and professors, can make or break an instructor. What would students say if they had Socrates as a professor?

This class on philosophy was really good, Professor Socrates is sooooo smart, I want to be just like him when I graduate (except not so short). I was amazed at how he could take just about any argument and prove it wrong.

I would advise him, though, that he doesn't know everything, and one time he even said in class that the wise man is someone who knows that he knows little (Prof. Socrates, how about that sexist language!?). I don't think he even realizes at times that he contradicts himself. But I see that he is just eager to share his vast knowledge with us, so I really think it is more a sin of enthusiasm than anything else.

I liked most of the meetings, except when Thrasymachus came. He was completely arrogant, and I really resented his male rage and his point of view. I guess I kind of liked him, though, because he stood up to Prof. Socrates, but I think he is against peace and justice and has no place in the modern university.

Also, the course could use more women (hint: Prof. Socrates, maybe next time you could have your wife Xanthippe come in and we can ask questions about your home life! Does she resent the fact that you spend so much time with your students?). All in all, though, I highly recommend both the course and the instructor.

Socrates is a real drag, I don't know how in hell he ever got tenure. He makes students feel bad by criticizing them all the time. He pretends like he's teaching them, but he's really ramming his ideas down student's throtes. He's always taking over the conversation and hardly lets anyone get a word in.

He's sooo arrogant. One time in class this guy comes in with some real good perspectives and Socrates just kept shooting him down. Anything the guy said Socrates just thought he was better than him.

He always keeps talking about these figures in a cave, like they really have anything to do with the real world. Give me a break! I spend serious money for my education and I need something I can use in the real world, not some b.s. about shadows and imaginary trolls who live in caves.

He also talks a lot about things we haven't read for class and expects us to read all the readings on the syllabus even if we don't discuss them in class and that really bugs me. Students' only have so much time and I didn't pay him to torture me with all that extra crap.

If you want to get anxious and depressed, take his course. Otherwise, steer clear of him! (Oh yeah, his grading is really subjective, he doesn't give any formal exams or papers so its hard to know where you stand in the class and when you try to talk to him about grades he just gets all agitated and changes the topic.)

For someone who is always challenging conventional wisdom (if I heard that term one more time I was going to die), Professor Socrates' ideal republic is pretty darn static. I mean there is absolutely no room to move there in terms of intellectual development and social change.

Also, I was taking this course on queer theory and one of the central concepts was "phallocentricism" and I was actually glad to have taken Socrates because he is a living, breathing phallocentrist!

Also, I believe this Republic that Prof. Socrates wants to design — as if anyone really wants to let this dreadful little man design an entire city — is nothing but a plan for a hegemonic, masculinist empire that will dominate all of Greece and enforce its own values and beliefs on the diverse communities of our multicultural society.

I was warned about this man by my adviser in women's studies. I don't see that anything other than white male patriarchy can explain his omnipresence in the agora and it certainly is evident that he contributes nothing to a multicultural learning environment. In fact, his whole search for the Truth is evidence of his denial of the virtual infinitude of epistemic realities (that term wasn't from queer theory, but from French lit, but it was amazing to see how applicable it was to queer theory).

One thing in his defense is that he was much more positive toward gay and lesbian people. Actually, there was this one guy in class, Phaedroh or something like that, who Socrates was always looking at and one day they both didn't come to class and they disappeared for the whole day. I'm quite sure that something is going on there and that the professor is abusing his power over this student.

I learned a lot in this class, a lot of things I never knew before. From what I heard from other students, Professor Socrates is kind of weird, and at first I agreed with them, but then I figured out what he was up to. He showed us that the answers to some really important questions already are in our minds.

I really like how he says that he is not so much a teacher, but a facilitator. That works for me because I really dislike the way most professors just read their lectures and have us write them all down and just regurgitate them back on tests and papers. We need more professors like Professor Socrates who are willing to challenge students by presenting materials in new and exciting ways.

I actually came out of this class with more questions than answers, which bothered me and made me uncomfortable in the beginning, but Professor Socrates made me realize that that's what learning is all about. I think it is the only class I ever took which made me feel like a different person afterward. I would highly recommend this class to students who want to try a different way of learning.

I don't know why all the people are so pissed at Professor Socrates! They say he's corrupting us, but it's really them that are corrupt. I know some people resent his aggressive style, but that's part of the dialectic. Kudos to you, Professor Socrates, you've really changed my way of thinking! Socs rocks!!

My first thought about this class was: this guy is really ugly. Then I thought, well, he's just a little hard on the eyes. Finally, I came to see that he was kind of cute. Before I used to judge everyone based on first impressions, but I learned that their outward appearances can be seen in different ways through different lenses.

I learned a lot in this class, especially about justice. I always thought that justice was just punishing people for doing things against the law and stuff. I was really blown away by the idea that justice means doing people no harm (and thanks to Prof. Socrates, I now know that the people you think are your enemies might be your friends and vice versa, I applied that to the people in my dorm and he was absolutely right).

An excellent class over all. One thing I could suggest is that he take a little more care about his personal appearance, because as we all know, first impressions are lasting impressions.

Socrates is bias and prejudice and a racist and a sexist and a homophobe. He stole his ideas from the African people and won't even talk to them now. Someone said that maybe he was part African, but there is noooooo way.

Thomas Cushman is a professor of sociology at Wellesley College.


Grade inflation begins before students attend college

When so many applicants present straight-A averages, how are colleges supposed to tell which ones are really the A+ students at the very top of their class?
In the cat-and-mouse maneuvering over admission to prestigious colleges and universities, thousands of high schools have simply stopped providing that information, concluding it could harm the chances of their very good, but not best, students. Canny college officials, in turn, have found a tactical way to respond. Using broad data that high schools often provide, like a distribution of grade averages for an entire senior class, they essentially recreate an applicant's class rank. The process has left them exasperated. "If we're looking at your son or daughter and you want us to know that they are among the best in their school, without a rank we don't necessarily know that," said Jim Bock, dean of admissions and financial aid at Swarthmore College.
Alan Finder, "Schools Avoid Class Ranking, Vexing Colleges," The New York Times, March 5, 2006 --- http://www.nytimes.com/2006/03/05/education/05rank.html


Why grades are worse predictors of academic success than standardized tests

Several weeks into his first year of teaching math at the High School of Arts and Technology in Manhattan, Austin Lampros received a copy of the school’s grading policy. He took particular note of the stipulation that a student who attended class even once during a semester, who did absolutely nothing else, was to be given 45 points on the 100-point scale, just 20 short of a passing mark.
Samuel G. Freedman, "A Teacher Grows Disillusioned After a ‘Fail’ Becomes a ‘Pass’," The New York Times, August 1, 2007 --- http://www.nytimes.com/2007/08/01/education/01education.html 

That student, Indira Fernandez, had missed dozens of class sessions and failed to turn in numerous homework assignments, according to Mr. Lampros’s meticulous records, which he provided to The New York Times. She had not even shown up to take the final exam. She did, however, attend the senior prom.

Through the intercession of Ms. Geiger, Miss Fernandez was permitted to retake the final after receiving two days of personal tutoring from another math teacher. Even though her score of 66 still left her with a failing grade for the course as a whole by Mr. Lampros’s calculations, Ms. Geiger gave the student a passing mark, which allowed her to graduate.

Continued in article

Grades are even worse than tests as predictors of success

"The Wrong Traditions in Admissions," by William E. Sedlacek, Inside Higher Ed, July 27, 2007 --- http://www.insidehighered.com/views/2007/07/27/sedlacek

Grades and test scores have worked well as the prime criteria to evaluate applicants for admission, haven’t they? No! You’ve probably heard people say that over and over again, and figured that if the admissions experts believe it, you shouldn’t question them. But that long held conventional wisdom just isn’t true. Whatever value tests and grades have had in the past has been severely diminished. There are many reasons for this conclusion, including greater diversity among applicants by race, gender, sexual orientation and other dimensions that interact with career interests. Predicting success with so much variety among applicants with grades and test scores asks too much of those previous stalwarts of selection. They were never intended to carry such a heavy expectation and they just can’t do the job anymore, even if they once did. Another reason is purely statistical. We have had about 100 years to figure out how to measure verbal and quantitative skills better but we just can’t do it.

Grades are even worse than tests as predictors of success. The major reason is grade inflation. Everyone is getting higher grades these days, including those in high school, college, graduate, and professional school. Students are bunching up at the top of the grade distribution and we can’t distinguish among them in selecting who would make the best student at the next level.

We need a fresh approach. It is not good enough to feel constrained by the limitations of our current ways of conceiving of tests and grades. Instead of asking “How can we make the SAT and other such tests better?” or “How can we adjust grades to make them better predictors of success?” we need to ask “What kinds of measures will meet our needs now and in the future?” We do not need to ignore our current tests and grades; we need to add some new measures that expand the potential we can derive from assessment.

We appear to have forgotten why tests were created in the first place. While they were always considered to be useful in evaluating candidates, they were also considered to be more equitable than using prior grades because of the variation in quality among high schools.

Test results should be useful to educators — whether involved in academics or student services — by providing the basis to help students learn better and to analyze their needs. As currently designed, tests do not accomplish these objectives. How many of you have ever heard a colleague say “I can better educate my students because I know their SAT scores”? We need some things from our tests that currently we are not getting. We need tests that are fair to all and provide a good assessment of the developmental and learning needs of students, while being useful in selecting outstanding applicants. Our current tests don’t do that.

The rallying cry of “all for one and one for all” is one that is used often in developing what are thought of as fair and equitable measures. Commonly, the interpretation of how to handle diversity is to hone and fine-tune tests so they work equally well for everyone (or at least to try to do that). However, if different groups have different experiences and varied ways of presenting their attributes and abilities, it is unlikely that one could develop a single measure, scale, test item etc. that could yield equally valid scores for all. If we concentrate on results rather than intentions, we could conclude that it is important to do an equally good job of selection for each group, not that we need to use the same measures for all to accomplish that goal. Equality of results, not process, is most important.

Therefore, we should seek to retain the variance due to culture, race, gender, and other aspects of non-traditionality that may exist across diverse groups in our measures, rather than attempt to eliminate it. I define non-traditional persons as those with cultural experiences different from those of white middle-class males of European descent; those with less power to control their lives; and those who experience discrimination in the United States.

While the term “noncognitive” appears to be precise and “scientific” sounding, it has been used to describe a wide variety of attributes. Mostly it has been defined as something other than grades and test scores, including activities, school honors, personal statements, student involvement etc. In many cases those espousing noncognitive variables have confused a method (e.g. letters of recommendation) with what variable is being measured. One can look for many different things in a letter. Robert Sternberg’s system of viewing intelligence provides a model, but it is important to know what sorts of abilities are being assessed and that those attributes are not just proxies for verbal and quantitative test scores. Noncognitive variables appear to be in Sternberg’s experiential and contextual domains, while standardized tests tend to reflect the componential domain. Noncognitive variables are useful for all students, but they are particularly critical for non-traditional students, since standardized tests and prior grades may provide only a limited view of their potential.

My colleagues, students, and I have developed a system of noncognitive variables that has worked well in many situations. The eight variables in the system are self-concept, realistic self-appraisal, handling the system (racism), long range goals, strong support person, community, leadership, and nontraditional knowledge. Measures of these dimensions are available at no cost in a variety of articles and in a book, Beyond the Big Test.

This Web site has previously featured how Oregon State University has used a version of this system very successfully in increasing their diversity and student success. Aside from increased retention of students, better referrals for student services have been experienced at Oregon State. The system has also been employed in selecting Gates Millennium Scholars. This program, funded by the Bill & Melinda Gates Foundation, provides full scholarships to undergraduate and graduate students of color from low-income families. The SAT scores of those not selected for scholarships were somewhat higher than those selected. To date this program has provided scholarships to more than 10,000 students attending more than 1,300 different colleges and universities. Their college GPAs are about 3.25, with five year retention rates of 87.5 percent and five year graduation rates of 77.5 percent, while attending some of the most selective colleges in the country. About two thirds are majoring in science and engineering.

The Washington State Achievers program has also employed the noncognitive variable system discussed above in identifying students from certain high schools that have received assistance from an intensive school reform program also funded by the Bill & Melinda Gates Foundation. More than 40 percent of the students in this program are white, and overall the students in the program are enrolling in colleges and universities in the state and are doing well. The program provides high school and college mentors for students. The College Success Foundation is introducing a similar program in Washington, D.C., using the noncognitive variables my colleagues and I have developed.

Recent articles in this publication have discussed programs at the Educational Testing Service for graduate students and Tufts University for undergraduates that have incorporated noncognitive variables. While I applaud the efforts for reasons I have discussed here, there are questions I would ask of each program. What variables are you assessing in the program? Do the variables reflect diversity conceptually? What evidence do you have that the variables assessed correlate with student success? Are the evaluators of the applications trained to understand how individuals from varied backgrounds may present their attributes differently? Have the programs used the research available on noncognitive variables in developing their systems? How well are the individuals selected doing in school compared to those rejected or those selected using another system? What are the costs to the applicants? If there are increased costs to applicants, why are they not covered by ETS or Tufts?

Until these and related questions are answered these two programs seem like interesting ideas worth watching. In the meantime we can learn from the programs described above that have been successful in employing noncognitive variables. It is important for educators to resist half measures and to confront fully the many flaws of the traditional ways higher education has evaluated applicants.

William E. Sedlacek is professor emeritus at the University of Maryland at College Park. His latest book is Beyond the Big Test: Noncognitive Assessment in Higher Education

CUNY to Raise SAT Requirements for Admission
The City University of New York is beginning a drive to raise admissions requirements at its senior colleges, its first broad revision since its trustees voted to bar students needing remedial instruction from its bachelor’s degree programs nine years ago. In 2008, freshmen will have to show math SAT scores 20 to 30 points higher than they do now to enter the university’s top-tier colleges — Baruch, Brooklyn, City, Hunter and Queens — and its six other senior colleges.
Karen W. Arenson, "CUNY Plans to Raise Its Admissions Standards," The New York Times, July 28, 2007 --- http://www.nytimes.com/2007/07/28/education/28cuny.html


Note the Stress on Grades (Point 4 Below)

"Playbook: Does Your School Make The Grade? Here are four things to consider when applying to an undergrad business program" by Louis Lavelle, with Geoff Gloeckler and Jane Porter, Business Week, March 19, 2007 ---
Click Here

1. COMPETITION IS FIERCE
Once considered a haven for less academically gifted students, undergraduate business programs are raising their standards. With more students beating a path to their doors, many B-schools are boosting their admissions criteria and getting fussier.

At schools with four-year programs, SAT and ACT requirements have gone up. The average SAT score for freshmen admitted to the Indiana University business program, where applications nearly doubled last year, is now 1340—up from 1312 in 2005-2006 and a full 343 points higher than the national average for test takers who intend to major in business. At universities with two-year business programs, especially those like the University of Iowa where more than 2,000 declared business majors are waiting to join a program designed for 1,300, GPA requirements in pre-business courses are rising, too.

For students, the higher bar requires a strategic rethink. Many already take standardized tests multiple times to maximize scores. Those with lower scores who are applying directly to four-year business programs are beefing up their applications in other ways, including taking part in extracurricular activities and fund-raisers. Savvy applicants assess the likelihood of being accepted at their first-choice schools and give more thought to less selective "safety" schools.

Those applying to a four-year school with a two-year business program are advised to contemplate what they'll do if they can't find places as juniors. Can credits accumulated in the first two years be transferred to another school? Can one stay put, declare another major, and obtain a minor in business instead?

2. IT'S A NATIONAL GAME
Undergraduate business education used to be a local or regional affair. That's changing. Today, many students attend programs far from home.

Out-of-state schools may provide a broader array of programs than those available in an applicant's home state. They include leadership, entrepreneurship, and global business. A number of schools have launched specialized programs that place students in hard-to-crack industries that are located in the school's backyard—such as sports marketing at the University of Oregon, home state of Nike (NKE) and Adidas, among others; energy commerce at Texas Tech University; life sciences at Wharton; and both cinematic arts and computer engineering at the University of Southern California.

If the academic offerings aren't enough to get the intellectual juices flowing, consider this: Out-of-state tuition at top public universities can be a bargain. Attending a top private B-school like Wharton can easily cost more than $30,000 a year, excluding room and board and other living expenses. A highly ranked public school like the No. 2 University of Virginia costs $25,945; No. 13 University of Texas at Austin is $22,580; and No. 15 University of North Carolina, $18,010.

Many of the public schools have programs that are roughly on par with private institutions—in terms of class size, faculty-student ratios, and other measures. Public schools can also be easier to get into. The average SAT score at Wharton is 1430—compared with 1366 for Virginia, 1335 at UNC, and 1275 for Texas-Austin.

Sometimes out-of-state schools, public or private, are better at finding grads decent jobs. If a school has established recruiting relationships with specific industries, it may be worth a look—no matter where it is. Are you an aspiring accountant? All of the Big Four firms recruit at Texas-Austin. Aiming for Wall Street? Recruiters for eight financial-services giants are among the 10 top recruiters at New York University. For a would-be "master of the universe" living in Oklahoma who is considering the University of Oklahoma—where no big investment banks recruit—the message is clear: change career goals, or start packing.

3. INTERNSHIPS MATTER
Internships are a valuable learning experience. Since many employers use them as extended tryouts for full-time positions, they are also an important pipeline to the most coveted jobs. So scoring one ought to be near the top of every undergrad's agenda. Yet not all programs provide the same access to internships. At No. 5 University of Michigan, 92% of undergrads who completed our survey had internships, compared with less than 25% at No. 81 University of Texas at Dallas. And not all internships are created equal. Co-op programs at the University of Cincinnati, Northeastern University, and Penn State allow students to graduate with up to two years of work experience. Elsewhere, a three-month summer internship is the norm.

Why the disparity? For one thing, location matters. To a casual observer there wouldn't appear to be much to differentiate the undergraduate B-school program at Fordham University from that of the University of Denver. Both are private, four-year programs. Tuition and enrollment are almost identical. And in last year's ranking they came in at No. 48 and No. 49, respectively. But at Denver, 57 companies recruited undergrads for internships. At New York-based Fordham: 200. Emily Sheu transferred from No. 4 Emory University to No. 34 (this year) Fordham, where she had internships at Bloomberg and Merrill Lynch & Co. (MER). For her, it was all about location. "Atlanta," she points out, "is no Manhattan."

Students at three- and four-year programs are more likely to take in-depth business courses early, making them more competitive internship candidates. That's one reason why the University of Michigan is phasing out its two-year program in favor of a three-year model. Also, watch out for summer school. When schools schedule classes in the summer before the junior year, having more than one internship before graduation becomes near-impossible.

4. BEWARE THE GRADING CURVE
Are grades really such a big deal? The answer is a resounding "yes," especially for those considering schools like Michigan, Babson College, Oregon, or Pennsylvania, where grading curves are a fact of business school life. Curves designed to counter grade inflation by limiting the number of As in any given class can make it difficult for even high performers to land interviews with some recruiters.

USC's Marshall School of Business grades students on a curve, with professors expected to hold the average GPA to 3.0 in core courses and 3.3 in electives. Most students will get a 3.0, or a B, in each of their 10 core business courses. A handful will earn a slightly higher grade, and the same number will earn a lower grade.
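
For readers who want to see the arithmetic, here is a rough Python sketch of one way a class can be curved to a target mean. It is purely illustrative and is not USC Marshall's actual policy or formula; the letter-grade quotas are assumptions chosen so that the grade points average out near 3.0.

    # Hypothetical curve: rank students by raw score and hand out letter
    # grades in fixed proportions whose grade points average roughly 3.0.
    def curve_to_target(raw_scores, quotas=None):
        """raw_scores: dict mapping student -> numeric raw score.
        quotas: ordered (letter, grade_points, share_of_class) triples."""
        if quotas is None:
            # 10% A, 20% B+, 40% B, 20% B-, 10% C+  ->  mean of about 3.03
            quotas = [("A", 4.0, 0.10), ("B+", 3.3, 0.20), ("B", 3.0, 0.40),
                      ("B-", 2.7, 0.20), ("C+", 2.3, 0.10)]
        ranked = sorted(raw_scores, key=raw_scores.get, reverse=True)
        grades, i = {}, 0
        for letter, points, share in quotas:
            n = round(share * len(ranked))
            for student in ranked[i:i + n]:
                grades[student] = (letter, points)
            i += n
        for student in ranked[i:]:          # anyone left over from rounding
            grades[student] = (quotas[-1][0], quotas[-1][1])
        return grades

The point of the sketch is the recruiting consequence discussed below: under quotas like these, most of the class sits at or below 3.3 no matter how strong the raw scores are, which is exactly why a fixed GPA screen applied by a recruiter unfamiliar with the curve can filter out good students.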

For recruiters trolling B-school campuses, a GPA of under 3.5 will in many cases consign a résumé to the bottom of the stack. At Marshall, most large employers take the grade structure into consideration, so students are rarely passed over for interviews. But for smaller companies not familiar with the school, students are at a disadvantage. David Freeman, a recent Marshall grad, estimates that he missed out on a dozen interviews because he didn't meet the grade requirements companies were looking for. "Without the curve, my GPA would have been high enough to qualify for these interviews," he says.

While a grading curve probably isn't a deal-breaker for students choosing among a handful of schools, it's certainly something that should be taken into consideration. It's worth asking, for example, if the policy is school-wide or if individual professors make their own rules, and whether the curve covers core courses, electives, or both.

Some students say that curves cause morale problems among students, intensifying competition and making it harder to form meaningful teams. Before enrolling in a program, prospective students should find out what, if anything, the school is doing to counter those problems.

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm


Grade inflation carries on after they get to college

Question
What was the average grade at Harvard in 1940?

Answer
In 1940, Harvard students received an unbelievable number of grades below C.

http://www.thecrimson.com/fmarchives/fm_03_01_2001/article4A.html 

In 1940, more Harvard students had an average grade of C- than any other GPA.

By 1986, that C- had ballooned to a B+. Today more students receive As and Bs than ever before. And that’s about as far as the consensus on grade inflation goes. Harry R. Lewis ‘68, dean of the College, doesn’t even use the word without distancing himself from its connotations. “I think that by far the dominant cause of grade ‘inflation’ at Harvard,” Lewis writes in an e-mail message, “is the application of constant grading standards to the work of ever more talented students.”

Continued in article

The average grade in leading private universities in 1992 was 3.11.  
In 2002 it jumped to 3.26 on a four-point scale.

Average undergraduate GPA for Alabama, California-Irvine, Carleton, Duke, Florida, Georgia Tech, Hampden-Sydney, Harvard, Harvey Mudd, Nebraska-Kearney, North Carolina-Chapel Hill, North Carolina-Greensboro, Northern Michigan, Pomona, Princeton, Purdue, Texas, University of Washington, Utah, Wheaton (Illinois), Winthrop, and Wisconsin-La Crosse. Note that inclusion in the average does not imply that an institution has significant inflation. Data on GPAs for each institution can be found at the bottom of this web page. Institutions comprising this average were chosen strictly because they have either published their data or have sent their data to the author on GPA trends over the last 11 years.
GradeInflation.com --- http://gradeinflation.com/ 

Grade inflation is emerging as the new leading scandal of higher education.
"The great grade-inflation lie Critics say that cushy grading is producing ignorant college students and a bankrupt education system," by Tom Scocca, The Boston Phoenix April 23 - 30, 1998 ---
http://www.bostonphoenix.com/archive/features/98/04/23/GRADE_INFLATION.html 


October 18, 2005 message from Tracey Sutherland [tracey@AAAHQ.ORG]

Re new faculty, teaching assistants, and teaching support -- there is an interesting body of literature developing sparked by a project begun in the early 1990's that's known as the Preparing Future Faculty initiative -- funded by the Pew Trusts, NSF, and others in conjunction with the Council of Graduate Schools and AAC&U -- more at http://www.preparing-faculty.org/  .

Our thread also seems to be spinning around the relationships between effort/grades and student course evaluations -- those with that interest may find a "Pop Quiz" on assumptions about student course ratings interesting: http://www.ntlf.com/html/pi/9712/rwatch_1.htm  . Lower on the page at that link is also a brief summary from Braskamp and Ory's "Assessing Faculty Work" a well-regarded book including meta-analysis of research on student ratings that includes:

"Factors that are significantly and positively associated with student ratings include the following: measures of student achievement; alumni, peer and administrative ratings; qualitative student comments; workload/difficulty level [ More difficult courses, with a greater workload, receive slightly higher student evaluations than do easier/lower workload courses]; energy and enthusiasm of the teacher; status as a regular faculty member (as opposed to a graduate assistant); faculty research productivity; student motivation; student expected grade; and course level. The size and practical significance of these relationships vary. For example, most agree that there is little practical significance to the small positive correlation between expected grade and student ratings, and between faculty research productivity and student ratings. Similarly, research shows a small and negative, but practically insignificant, relationship between class size and student ratings."

"Factors generally found to be unrelated to student ratings include faculty age and teaching experience, instructor's gender, most faculty personality traits, student's age, class level of student, student's GPA, student's personality, and student's gender (with the exception of a slight preference for same-sex instructors)."

Just grist for the mill!

Tracey

October 18, 2005 reply from Bob Jensen

One of the problems with studying correlations between teaching evaluations and grades is that the data are corrupted by grade inflation prior to the collection of the data.  Research studies have shown that, when teaching evaluations started to become disclosed for performance and tenure evaluation decisions, grade inflation commenced.  Some of these studies, such as those at Duke, Rutgers, and Montana are summarized below.

Thus it may be difficult to conclude that grading and teaching evaluations are not really correlated if the grade inflation took place before the data were collected. 

Over the years I’ve seen teaching evaluations of many faculty.  One thing that I noticed about grading and teaching evaluations is that students will hammer on instructors who they think have an “unfair” grading policy.  Unfairness can be defined in terms of teachers having “pets” and/or “ambiguity” over what it takes to get an A grade.

Ambiguity is a real problem!  Many faculty think that ambiguity is important when educating students about real world complexities.  If course content (e.g., cases and essay assignments) is ambiguous, it becomes more difficult to avoid ambiguity in the grading process.  I think some of the most serious grade inflation took place in courses where instructors wanted to leave ambiguity in course content and not get hammered on teaching evaluations due to student frustrations over grades.  This is especially common in graduate schools where virtually all grades are either A or B grades and a C is tantamount to an F.

Bob Jensen



Grade Inflation from High School to Graduate School
The Boston Globe reports seeing 30- 40 valedictorians per class

Extra credit for AP courses, parental lobbying and genuine hard work by the most competitive students have combined to shatter any semblance of a Bell curve

 

An increasing number of Canada's business schools are literally selling MBAs to generate revenue

 

[some] professors who say their colleagues are so afraid of bad student evaluations that they are placating students with A's and B's.

 

From Jim Mahar's blog on November 24, 2006 --- http://financeprofessorblog.blogspot.com/

 

Grade inflation from HS to Grad school

Three related stories that are not strictly speaking finance but that should be of interest to most in academia.

In the first article, which is from the Ottawa Citizen, accelerated and executive MBA programs come under attack for their supposed detrimental impact on learning in favor of revenue.

MBAs dumbed down for profit:
"An increasing number of Canada's business schools are literally selling MBAs to generate revenue for their ravenous budgets, according to veteran Concordia University finance professor Alan Hochstein.

That apparent trend to make master of business administration degrees easier to achieve at a premium cost is leading to 'sub-standard education for enormous fees,' the self-proclaimed whistleblower said yesterday"
The second article is a widely reported AP article that centers on high school grade inflation. This high school issue not only makes the admissions process more difficult but it also influences the behavior of the students ("complaining works") and their grade expectations ("I have always gotten A's and therefore I deserve one here").

A few look-ins from the Boston Globe's version:
"Extra credit for AP courses, parental lobbying and genuine hard work by the most competitive students have combined to shatter any semblance of a Bell curve, one in which 'A's are reserved only for the very best. For example, of the 47,317 applications the University of California, Los Angeles, received for this fall's freshman class, nearly 21,000 had GPAs of 4.0 or above."
or consider this:
""We're seeing 30, 40 valedictorians at a high school because they don't want to create these distinctions between students...."
and
"The average high school GPA increased from 2.68 to 2.94 between 1990 and 2000, according to a federal study."
This is not just a high school problem. In part because of an agency cost problem (professors have incentives to grade leniently even if it is to the detriment of students), the same issues are regular discussion topics at all colleges as well. For instance, consider this story from the Denver Post.
"A proposal to disclose class rank on student transcripts has ignited a debate among University of Colorado professors with starkly different views on whether grade inflation is a problem....

[some] professors who say their colleagues are so afraid of bad student evaluations that they are placating students with A's and B's.

The few professors who grade honestly end up with dismal scores on student evaluations, which affect their salaries, professor Paul Levitt said. There is also the "endless parade of malcontents" in their offices."

I would love to wrap this up with my own solution, but obviously it is a tough problem to which there are no easy solutions. That said, maybe it is time that I personally look back at my past years' class grades to make sure I am not getting too soft. If we all did that, we'd at least make a dent in the problem.

 

"Admissions boards face 'grade inflation'," by Justin Pope, Boston Globe, November 18, 2006 --- Click Here

That means he will have to find other ways to stand out.

"It's extremely difficult," he said. "I spent all summer writing my essay. We even hired a private tutor to make sure that essay was the best it can be. But even with that, it's like I'm just kind of leveling the playing field." Last year, he even considered transferring out of his highly competitive public school, to some place where his grades would look better.

Some call the phenomenon that Zalasky's fighting "grade inflation" -- implying the boost is undeserved. Others say students are truly earning their better marks. Regardless, it's a trend that's been building for years and may only be accelerating: Many students are getting very good grades. So many, in fact, it is getting harder and harder for colleges to use grades as a measuring stick for applicants.

Extra credit for AP courses, parental lobbying and genuine hard work by the most competitive students have combined to shatter any semblance of a Bell curve, one in which 'A's are reserved only for the very best. For example, of the 47,317 applications the University of California, Los Angeles, received for this fall's freshman class, nearly 21,000 had GPAs of 4.0 or above.

That's also making it harder for the most selective colleges -- who often call grades the single most important factor in admissions -- to join in a growing movement to lessen the influence of standardized tests.

"We're seeing 30, 40 valedictorians at a high school because they don't want to create these distinctions between students," said Jess Lord, dean of admission and financial aid at Haverford College in Pennsylvania. "If we don't have enough information, there's a chance we'll become more heavily reliant on test scores, and that's a real negative to me."

Standardized tests have endured a heap of bad publicity lately, with the SAT raising anger about its expanded length and recent scoring problems. A number of schools have stopped requiring test scores, to much fanfare.

Continued in article

 

"Regents evaluate grade inflation:  Class Ranking Debated," by Jennifer Brown, Denver Post, November 2, 2006 --- http://www.denverpost.com/headlines/ci_4588002

 

A proposal to disclose class rank on student transcripts has ignited a debate among University of Colorado professors with starkly different views on whether grade inflation is a problem.

On one side are faculty who attribute the climbing grade-point averages at CU to the improved qualifications of entering students in the past dozen years.

And on the other are professors who say their colleagues are so afraid of bad student evaluations that they are placating students with A's and B's.

One Boulder English professor said departments should eliminate raises for faculty if the GPAs within the department rise above a designated level.

The few professors who grade honestly end up with dismal scores on student evaluations, which affect their salaries, professor Paul Levitt said. There is also the "endless parade of malcontents" in their offices.

"You have to be a masochist to proceed in that way," said Levitt, one of 10 professors and business leaders who spoke to CU regents about grade inflation Wednesday.

CU president Hank Brown suggested in August that the university take on grade inflation by putting class rank or grade-point-average percentiles on student transcripts.

Changing the transcripts would give potential employers and graduate schools a clearer picture of student achievement, Brown said.

At the Boulder campus, the average GPA rose from 2.87 in 1993 to 2.99 in 2004.

Regents are not likely to vote on the issue for a couple of months.

Regent Tom Lucero wants to go beyond Brown's suggestion and model CU's policy after Princeton University, where administrators instituted a limit on A's two years ago.

"As long as we do something to address this issue, I'll be happy nonetheless," he said.

But many professors believe academic rigor is a faculty issue and regents should stay out of it.

"Top-down initiatives ... will likely breed not higher expectations but a growing sense of cynicism," said a report from the Boulder Faculty Assembly, which opposes Brown's proposals.

Still, the group wrote that even though grade inflation has been "modest," the issue of academic rigor "deserves serious ongoing scrutiny."

"More important than the consideration of grades is the quality of education our students receive," said Boulder communication professor Jerry Hauser.

CU graduates are getting jobs at top firms, landing spots in elite graduate schools and having no trouble passing bar or licensing exams, he said.

But faculty who believe grade inflation is a serious problem said they welcome regent input.


Ignorant of Their Ignorance
My undergraduate students can’t accurately predict their academic performance or skill levels. Earlier in the semester, a writing assignment on study styles revealed that 14 percent of my undergraduate English composition students considered themselves “overachievers.” Not one of those students was receiving an A in my course by midterm. Fifty percent were receiving a C, another third was receiving B’s, and the remainder had earned failing grades by midterm. One student wrote, “overachievers like myself began a long time ago.” She received a 70 percent on her first paper and a low C at midterm.
Shari Wilson, "Ignorant of Their Ignorance," Inside Higher Ed, November 16, 2006 --- http://www.insidehighered.com/views/2006/11/16/wilson
Jensen comment
This does not bode well for self assessment.


What not to say to your professor/instructor
Top Ten No Sympathy Lines (Plus a Few Extra) --- http://www.uwgb.edu/dutchs/nosymp.htm

Here are some samples:

Think of it as a TOP TEN list with a few bonus items:
  1. This Course Covered Too Much Material...
  2. The Expected Grade Just for Coming to Class is a B
  3. I Disagreed With the Professor's Stand on ----
  4. Some Topics in Class Weren't on the Exams
  5. Do You Give Out a Study Guide?
  6. I Studied for Hours
  7. I Know The Material - I Just Don't Do Well on Exams
  8. I Don't Have Time For All This (...but you don't understand - I have a job.)
  9. Students Are Customers
  10. Do I Need to Know This?
  11. There Was Too Much Memorization
  12. This Course Wasn't Relevant
  13. Exams Don't Reflect Real Life
  14. I Paid Good Money for This Course and I Deserve a Good Grade
  15. All I Want Is The Diploma

RateMyProfessors has some real-world examples of comments that professors hated even more --- http://www.ratemyprofessors.com/Funniest.jsp

A few samples are shown below:


Blackboard Will Soon Do Online Course Evaluations:
Should They Be Shared With the Administrators and/or the Public?

"Digital Assessments," by David Epstein, Inside Higher Ed, June 20, 2006 --- http://www.insidehighered.com/news/2006/06/20/blackboard

Assessment is quickly becoming the new black. It’s one of the themes of the Secretary of Education’s Commission on the Future of Higher Education. More and more institutions, some prodded by accreditors, are looking for rigorous ways — often online — to compile course data.

Now Blackboard, a leading provider of course management software, is making plans to enter the assessment field.

Blackboard already offers the capability to do course evaluations, and for over a year-and-a-half the company has been researching more comprehensive assessment practices.

The prospect of online evaluations and assessments, for many faculty members, conjures images of RateMyProfessors.com, the unrestricted free-for-all where over 700,000 professors are rated — often to their dismay — by anonymous reviewers. Blackboard — and some others looking to enter the evaluation field — are planning very different and more educationally oriented models. Blackboard’s approach is oriented more toward evaluating the course than the professor.

Blackboard has generally enjoyed a good reputation among faculty members, dating to its beginnings as a small startup. One of the things that has endeared Blackboard to academics is the ability they have had to customize the company’s products, and Blackboard, though it’s no longer small, will seek to keep important controls in the hands of institutions.

With institutions looking to do evaluations and assessment online, Debra Humphreys, a spokeswoman with the Association of American Colleges and Universities, said that Blackboard’s outcomes assessment program “could make trends that are already underway easier for schools.”

David Yaskin, vice president for product marketing at Blackboard, said that a key component of Blackboard’s system — which is in development — will likely be online portfolios that can be tracked in accordance with learning outcomes that are determined by faculty members, departments or institutions.

Yaskin said he’d like to see a system with “established outcomes, and a student has to provide evidence” of progress toward those outcomes, whether in the form of papers, photography collections or other relevant measures. Yaskin added that faculty members could create test questions as well, if they are so inclined, but that, for Blackboard’s part, the “current plan is not to use centralized testing in version 1.0, because higher ed is focused on higher orders of learning.”

One of the most powerful aspects of the program, Yaskin said, will likely be its ability to compile data and slice it in different ways. Institutions can create core sets of questions they want, for a course evaluation, for example, but individual departments and instructors can tailor other questions, and each level of the hierarchy can look at its own data. Yaskin said that it’s important to allow each level of that hierarchy to remain autonomous. He added that there should be a way for “faculty members to opt out” of providing the data they got from tailored questions to their superiors if they want. Otherwise, he said, faculty members might be reticent to make full use of the system to find out how courses can be improved.
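
As an illustration of the kind of slicing Yaskin describes, the short Python sketch below averages the same pool of evaluation responses at the institution, department, and instructor levels. The data and field names are hypothetical; this is not Blackboard's software or API, just a picture of the idea.

    from collections import defaultdict

    # One row per student response to a core question on a 1-5 scale (made-up data).
    responses = [
        {"dept": "ACCT", "instructor": "Smith", "score": 4},
        {"dept": "ACCT", "instructor": "Smith", "score": 5},
        {"dept": "ACCT", "instructor": "Jones", "score": 3},
        {"dept": "FIN",  "instructor": "Lee",   "score": 4},
    ]

    def average_by(rows, key):
        """Average score for each distinct value of key ('dept' or 'instructor')."""
        groups = defaultdict(list)
        for row in rows:
            groups[row[key]].append(row["score"])
        return {k: sum(v) / len(v) for k, v in groups.items()}

    print("institution:", sum(r["score"] for r in responses) / len(responses))
    print("by department:", average_by(responses, "dept"))
    print("by instructor:", average_by(responses, "instructor"))

Each level of the hierarchy in Yaskin's description simply sees a different grouping of the same responses, which is why the opt-out question he raises (whether an instructor's tailored questions roll up to the department) matters.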

Yaskin added that, if certain core outcomes are defined by a department, the department can use the system to track the progress of students as they move from lower to upper level courses.

Because Blackboard, which bought WebCT, has 3,650 clients, any service it can sell to its base could spread very quickly. While details on pricing aren’t available, the assessment services will be sold separately from course management software.

The idea of online evaluation is not new. Blackboard has been looking to colleges already using online course evaluations and assessments for ideas.

Washington University in St. Louis — which wasn’t one of the consulted institutions named by Blackboard — took over five years to develop an internal online course evaluation system. A faculty member in the anthropology department developed templates, and other faculty members can add specific questions. Students then have access to loads of numerical data, including average scores by department, but the comments are reserved for professors. Henry Biggs, associate dean of Washington University’s College of Arts and Sciences, was involved with the creation of the system, and said that too much flexibility can take away from the reliability of an evaluation or assessment system.

Washington University professors have to petition if they want their ratings withheld. “If faculty members can decide what to make public, there can be credibility issues,” Biggs said. “It’s great for faculty members to have a lot of options, but, essentially, by giving a lot of options you can create a very un-level playing field.”

Biggs said that the Blackboard system could be great for institutions that don’t have the resources to create their own system, but that a lot of time is required of faculty members and administrators to manage an assessment system even if the fundamental technology is in place. “The only way it can really work is if there are staff that are either hired, or redirected to focus entirely on getting that set up,” Biggs said. “I don’t think you will find professors with time to do that.”

Humphreys added that “the real time is the labor” from faculty members, and that technology often doesn’t make things so much easier, but may make something like assessments better. “People think of technology as saving time and money,” Humphreys said. “It rarely is that, but it usually adds value,” like the ability to manipulate data extensively.

Some third-party course evaluation systems already offer tons of data services. OnlineCourseEvaluations.com has been working with institutions — about two dozen clients currently — for around three years doing online evaluations.

Online Course Evaluations, according to president Larry Piegza, also allows an institution to develop follow-up questions to evaluation questions. If an evaluation asks, for example, if an instructor spoke audibly and clearly, Piegza said, a follow-up question asking what could be done – use a microphone; face the students – to improve the situation can be set to pop up automatically. Additionally, faculty members can sort data by ratings, so they can see comments from all the students who ripped them, or who praised them, and check for a theme. “We want teachers to be able to answer the question, ‘how can I teach better tomorrow?’” Piegza said.

Daily Jolt, a site that has a different student-run information and networking page for each of about 100 institutions that host a page, is getting into the evaluation game, but the student-run evaluation game.

Mark Miller and Steve Bayle, the president and chief operating officer of Daily Jolt, hope to provide a more credible alternative to RateMyProfessors.com. Like RMP, Daily Jolt’s evaluations, which should be fully unveiled next fall, do not verify .edu e-mail addresses, but they do allow users to rate commenters, similarly to what eBay does with buyers and sellers, and readers can see all of the posts by a particular reviewer to get a sense of that reviewer.

Biggs acknowledged that student-run evaluation sites are here to stay, but said that, given the limited number of courses any single student evaluates, it’s unlikely that reviewing commenters will add a lot of credibility. Miller said that faculty members will be able to pose questions in forums that students can respond to.

“A lot of faculty members want to put this concept [of student run evaluations] in a box and make it go away,” Miller said. “That’s not going to happen, so we might as well see if we can do it in a respectful way.”

Continued in article

Jensen Comment
I think course evaluations should be private information between the students in a class and the instructor. They should be required, but they should not be used in tenure, performance, and pay evaluations. One huge problem is that if they are not private communications, research shows that they lead to grade inflation. Another huge problem is that students who fill out the evaluations are not personally accountable for lies, misguided humor, and frivolous actions. What students want are popular teachers, who are not necessarily the best medicine for education.

Differences between "popular teacher"
versus "master teacher"
versus "mastery learning"
versus "master educator."
http://www.trinity.edu/rjensen/assess.htm#Teaching
 


Princeton University has announced success in its campaign against grade inflation.
In 2004, the university announced guidelines designed to limit the percentage of A grades, based on the belief that there were far too many being awarded. Data released this week by the university found that in 2004-7, A grades (A+, A, A-) accounted for 40.6 percent of grades in undergraduate courses, down from 47.0 percent in 2001-4. In humanities departments, A’s accounted for 45.9 percent of the grades in undergraduate courses in 2004-7, down from 55.5 percent in 2001-4. In the social sciences, there were 37.6 percent A grades in 2004-7, down from 43.3 percent in the previous three years. In the natural sciences, there were 35.7 percent A grades in 2004-7, compared to 37.2 percent in 2001-4. In engineering, the figures were 42.1 percent A’s in 2004-7, down from 50.2 percent in the previous three years.
Inside Higher Ed, September 19, 2007


"Fewer A’s at Princeton," by Scott Jaschik, Inside Higher Ed, September 20, 2005 --- http://www.insidehighered.com/news/2005/09/20/princeton

Princeton University students need to work harder for the A’s.

The university released results Monday of the first year under a new grading policy, designed to tackle the issue of grade inflation. In the last academic year, A’s (including plus and minus grades) accounted for 40.9 percent of all grades awarded. That may not be consistent with a bell curve, but the figure is down from 46.0 percent the previous year, and 47.9 percent the year before that.

Princeton’s goal is to have A’s account for less than 35 percent of the grades awarded. Nancy Malkiel, dean of the college at Princeton, said that based on progress during the first year, she thought the university would have no difficulty achieving that goal.

The data indicate that some fields have come quite close to the target while others lag. The only category that stayed the same the year the new policy took effect (natural sciences) was already near the target.

Percentage of Undergraduate A’s at Princeton, by Disciplinary Category

Discipline          2004-5    2003-4
Humanities          45.5%     56.2%
Social sciences     38.4%     42.5%
Natural sciences    36.4%     36.4%
Engineering         43.2%     48.0%

The university did not impose quotas, but asked each department to review grading policies and to discuss ways to bring grades down to the desired level. Departments in turn discussed expectations for different types of courses, and devised approaches to use. For independent study and thesis grades, the Princeton guidelines expect higher grades than for regular undergraduate courses, and that was the case last year.

Malkiel said that she wasn’t entirely certain about the differences among disciplines, but that, generally, it was easier for professors to bring grades down when they evaluate student work with exams and problem sets than with essays. She said that by sharing ideas among departments, however, she is confident that all disciplines can meet the targets.

Universities should take grade inflation seriously, she said, as a way to help their students.

“The issue here is how we do justice to our students in our capacity as educators, and we have a responsibility to show them the difference between their very best work and their good work, and if we are giving them the same grades for the very best work and for their good work, they won’t know the difference and we won’t stretch them as far as they are capable of stretching,” she said.

Despite the additional pressure on students who want A’s, she said, professors have not reported any increase in students complaining about or appealing the grades.

In discussions about grade inflation nationally, junior faculty members have complained that it is hard for them to be rigorous graders for fear of getting low student evaluations. Malkiel said that she understood the concern, and that Princeton’s approach — by focusing attention on the issue — would help. “What this institution is saying loud and clear is that all of us together are expected to be responsible. So if you have a culture where the senior faculty are behaving that way, it will make it easier for the junior faculty to behave that way.”

Melisa Gao, a senior at Princeton and editor in chief of The Daily Princetonian, said that student reactions to the tougher grading policy have varied, depending on what people study. Gao is a chemistry major and she said that the new policy isn’t seen as a change in her department.

Professors have drawn attention to the new policy at the beginning of courses, and Gao said that some students say that they are more stressed about earning A’s, but that there has not been any widespread criticism of the shift.

Many companies are recruiting on campus now, and Gao said that students have wondered if they would be hurt by their lower grades. Princeton officials have said that they are telling employers and graduate schools about the policy change, so students would not be punished by it.

But, Gao added, “at the end of the day, you have a number on a transcript.”


Controversial Student Evaluations of Their Instructors

In most instances, instructors are accountable for their grading and evaluations of students.  Virtually all colleges have grading appeals processes.  Beyond internal appeals processes are courts of law and millions of lawyers who just might help sue an instructor. 

Virtually all student evaluations of instructors are anonymous.  Anonymous students are not accountable in any way for their evaluations of instructors.   I've long been in favor of anonymous student evaluations, but I think the evaluations should only be seen by the instructors being evaluated.  My main criticism is that both anecdotal and formal research suggest that using anonymous evaluations for tenure, promotion, and salary decisions   compromises academic standards and course content.  It's a major source of grade inflation in the United States --- http://www.trinity.edu/rjensen/assess.htm#GradeInflation

When courses are evaluated by an entire class, outliers will hopefully be "averaged out" in a variety of ways.  On RateMyProfessor, the database is filled mostly with outliers, which probably accounts for the fact that most evaluations amount to either implied "A" grades or implied "F" grades for the instructors being discussed.  There are too few responses, especially in a given year, for "averaging out."
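A minimal back-of-the-envelope sketch, using hypothetical numbers of my own rather than data from the site, shows why a handful of self-selected ratings cannot be "averaged out" the way a whole-class evaluation can:

full_class = ([3, 4] * 15) + [1, 1, 1, 5, 5, 5]   # 30 typical ratings plus a few extreme outliers
rmp_posts = [1, 1, 5, 5, 1]                       # only the motivated extremes bother to post online

def mean(xs):
    return sum(xs) / len(xs)

print(f"Whole-class evaluation mean: {mean(full_class):.2f}")   # extremes are diluted (about 3.4)
print(f"Self-selected online mean:   {mean(rmp_posts):.2f}")    # extremes dominate (2.6)

With the whole class responding, the mean lands near the typical student's view; with five self-selected posts, the extremes set the tone.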

Are professors upset with RateMyProfessor? I doubt that most know about it or care to know about it.
Such are some of the comments posted on RateMyProfessors -- a 6-year-old site that archives student critiques of most popular and least liked profs. With a database of more than 4 million ratings at more than 5,000 institutions of higher learning, the website has become a staple for many college students who use it to choose classes based on professors' evaluations.
Joanna Glasner, "Prof-Ratings Site Irks Academics," Wired News, September 29, 2005 --- http://www.wired.com/news/business/0,1367,68941,00.html?tw=wn_tophead_4
 

Jensen Comment
The RateMyProfessor site (for the U.S. and Canada) is at http://www.ratemyprofessors.com/index.jsp
When this site commenced six years ago, students tried to outdo each other with humorous and highly caustic evaluations that seemingly were written more for entertainment than serious evaluation.  I sense that over time, the evaluations are more serious and are intended, in large measure, to be more informative about a course and an instructor.  However, the site still has a featured "Funny Ratings" tab that continues to encourage humor over seriousness --- http://www.ratemyprofessor.com/Funniest.html

One thing that is entirely clear is that more and more professors are now being evaluated at this site.  Nearly 635,000 instructors from over 5,000 schools are now in the database, and thousands of evaluations are being added daily.  Rules for evaluations are available at http://www.ratemyprofessor.com/rater_guidelines.html

A continuing problem is that the evaluations are often given by outlier students who probably got very high or very low grades from an instructor they are praising/lambasting.  A moral hazard is that really disgruntled students may say untrue things, and that several disgruntled students may on occasion team up to make their evaluations sound consistent.  The comments are not necessarily reflective of the sentiments of the majority of students in a course, especially since all respondents constitute such a minuscule percentage of students in most courses across the six years of building this RateMyProfessor database.

But after reading the evaluations of many professors that I know, I think many students who send in comments these days want to be fair even to professors they don't particularly like personally.  Many show respect for the instructor even if they think the course is overly hard or overly boring.  Very often student comments focus on grading where instructors are rated as being either "very fair" or "extremely unfair with teacher's pets who get top grades no matter what."  I am always impressed when professors are rated as being extraordinarily tough and, at the same time, receive high evaluations from their students.  Virtually none of the students appreciate a course that features grappling with sweat-rendering ambiguity and a pedagogy of having to learn for themselves.

Always keep in mind that it's common for students to want a cut and dried course.  This type of course is not necessarily easy, but generally it does not make students grapple with ambiguity in content or ambiguity in the grading process.  Students always want to see the answer books or have the instructor explain the "best solution."  Unfortunately,  ambiguity in content and process is what they will later discover in the real world of adulthood. 

Top MBA programs often have a better idea when assigning complex and realistic cases where even the case writers themselves know of no right answers and suggest that the importance of case analysis is in the process rather than finding non-existent optimal answers.  Generally the most realistic problems in life have no optimal answers, but students hate a course where they are not rewarded gradewise for finding best or better answers.  The well-known maxim that "it only matters how you play the game" does not apply in the minds of students chasing "A" grades.

Except in rare instances, students are highly critical of instructors who force students to sweat and strain finding answers on their own.  For example, in my own university there was a first-year seminar course intended for student discussions of a new book every week.  Students despised a particular instructor who courageously never opened his own mouth in any class other than the first class of the semester.  This is most unfortunate since learning on your own is generally the best pedagogy for deep learning, long-term memory, creativity, and confrontations with ambiguity --- http://www.trinity.edu/rjensen/265wp.htm

Because a minuscule proportion of an instructor's students send messages to RateMyProfessor, the database should never be used for tenure, promotion, or performance evaluations.  Serious evaluations are already impacted, in some cases very heavily, by the formal course evaluations required by colleges and universities in all courses.  Instructor evaluation is a good thing when it inspires an instructor to improve course preparation, course delivery, and other types of communication with students.  It is a bad thing when it motivates the instructor to give easier courses and/or become an easier grader. 

It would be interesting to know the course grades of the most negative students.  In most instances, instructors are accountable for their grading and evaluations of students.  Virtually all colleges have grading appeals processes.  Beyond internal appeals processes are courts of law and millions of lawyers who just might help sue an instructor.  Anonymous students are not accountable in any way for their evaluations of instructors.

Evidence from research into such matters indicates that collegiate student evaluations do lead to easier grading out of fear that low evaluations will adversely impact tenure outcomes, promotions, and salaries --- http://www.trinity.edu/rjensen/assess.htm#GradeInflation

I don't think RateMyProfessor has much impact on instructor behavior or grading, because most professors I know either don't know about the site or don't care to view it, given how few self-selected students submit ratings relative to the total number of students who never do.

There is also moral hazard if the site is ever used for serious performance evaluations.  Really unscrupulous professors might selectively request that a few "pets" submit evaluations to RateMyProfessor, knowing that these students will give glowing evaluations much higher than those of the majority of the class.  I don't think this has been a problem up to now, because the site is rarely viewed by most professors and administrators, at least not by those whom I know.  But it is much easier to manipulate a few evaluations per instructor on the RateMyProfessor site than to manipulate the many evaluations collected when all students are asked to evaluate the instructor in every course.

The RateMyProfessor site might have future impact when it comes to hiring faculty applicants seeking to change universities.  If that happens, it would be most unfortunate due to the extreme limitations of the data gathering process.  Unfortunately, one small rumor can destroy a career, and one small rumor can be started by something discovered on RateMyProfessor.

To the extent RateMyProfessor leads to false rumors resting upon so few respondents, this site is bad for the academy.  To the extent that it leads to popularity contests between instructors more concerned with student happiness than student learning, this site is bad for the academy. 

The one true fact in life is that our knowledge of the world has become so vast and so complex that the only way for students to really learn is with sweat, tears, and inevitable frustration when dealing with ambiguities.  Students are often too ignorant (even if they are very bright) to understand that spoon-feeding is not the best way to learn.  They are often too immature to realize that the best instructors are the ones who take the time and trouble to critique their work in depth.  They are also too ignorant in many instances to know what is very important relative to what is less important in course content.  Sometimes it takes years after graduation to be grateful for having learned something that seemed pointless or a waste of time years earlier.

In my own case, an accounting professor named Kesselman, whom I hated the most in college, became the professor that I belatedly, after graduation, came to appreciate the most.  And a sweet and elderly teacher named Miss Miller, who told us so many interesting things about her life in our high school algebra class, became the one I appreciated the least in retrospect, because as Miss Miller's top student I still had to take algebra in my first semester at Iowa State University when I should have been ready to plunge into calculus.


Woebegone About Grade Inflation
Grade inflation continues to occupy the attention of the media, the academy and the public at large. As a few Ivy League universities have adjusted grading policies, and a few of their professors have captured headlines with their statements on the issue, people have taken note. Absent from this discussion, however, are the voices of the silent majority: those who teach at non-elite institutions, as well as those at elite institutions who are not publicly participating in the debate.
Janice McCabe and Brian Powell, "Woebegone About Grade Inflation," Inside Higher Ed, July 27, 2005 --- http://www.insidehighered.com/views/2005/07/27/mccabe


Grade Inflation and Abdication
Over the last generation, most colleges and universities have experienced considerable grade inflation. Much lamented by traditionalists and explained away or minimized by more permissive faculty, the phenomenon presents itself both as an increase in students’ grade point averages at graduation as well as an increase in high grades and a decrease in low grades recorded for individual courses. More prevalent in humanities and social science than in science and math courses and in elite private institutions than in public institutions, discussion about grade inflation generates a great deal of heat, if not always as much light. While the debate on the moral virtues of any particular form of grade distribution fascinates as cultural artifact, the variability of grading standards has a more practical consequence. As grades increasingly reflect an idiosyncratic and locally defined performance levels, their value for outside consumers of university products declines. Who knows what an “A” in American History means? Is the A student one of the top 10 percent in the class or one of the top 50 percent? Fuzziness in grading reflects a general fuzziness in defining clearly what we teach our students and what we expect of them. When asked to defend our grading practices by external observers — parents, employers, graduate schools, or professional schools — our answers tend toward a vague if earnest exposition on the complexity of learning, the motivational differences in evaluation techniques, and the pedagogical value of learning over grading. All of this may well be true in some abstract sense, but our consumers find our explanations unpersuasive and on occasion misleading.
John V. Lombardi, "Grade Inflation and Abdication," Inside Higher Ed, June 3, 2005 --- http://www.insidehighered.com/views/2005/06/03/lombardi
 

It is important to look for counterarguments, such as the claim that it is dysfunctional to make students compete for high grades.  Probably the best argument that higher grades are not a leading scandal in higher education appears in the following article.
"The Dangerous Myth of Grade Inflation," by Alfie Kohn, The Chronicle of Higher Education, September 8, 2002 --- http://www.alfiekohn.org/teaching/gi.htm 
Jensen's Comment:  Kohn's argument seems to boil down to a conclusion that it is immoral to make students compete for the highest grades.  But he fails to account for the fact that virtually all universities do make students compete for A grades.  There are simply a lot more winners (in some cases about 50%) in modern times.  How does he think this makes the very best students, and the students who got below-average B grades, feel?

Dartmouth's Answer
On May 23, 1994 the Faculty voted that transcripts and student grade reports should indicate, along with the grade earned, the median grade given in the class as well as the class enrollment. Departments may recommend, with approval of the Committee on Instruction, that certain courses (e.g., honors classes, independent study) be exempted from this provision. Courses with enrollments of less than ten will also be exempted. At the bottom of the transcript there will be a summary statement of the following type: 'Exceeded the median grade in 13 courses; equaled the median grade in 7 courses; below the median grade in 13 courses; 33 courses taken eligible for this comparison.' This provision applies to members of the Class of 1998 and later classes.
"Median Grades for Undergraduate Courses" --- http://www.dartmouth.edu/~reg/courses/medians/index.html 

The Emperor’s Not Wearing Any Clothes
“But he has nothing on at all,” said a little child at last. “Good heavens! listen to the voice of an innocent child,” said the father, and one whispered to the other what the child had said. “But he has nothing on at all,” cried at last the whole people. That made a deep impression upon the emperor, for it seemed to him that they were right; but he thought to himself, “Now I must bear up to the end.” And the chamberlains walked with still greater dignity, as if they carried the train which did not exist.
Hans Christian Andersen, "The Emperor's New Suit" (1837) --- http://hca.gilead.org.il/emperor.html 

And many students get the highest grades with superficial effort and sometimes with humor
It may be hard to get into Harvard, but it's easy to get out without learning much of enduring value at all. A recent graduate's report by Ross Douthat:
"The Truth About Harvard," by Ross Douthat, The Atlantic, March 2005 --- http://www.theatlantic.com/doc/print/200503/douthat 

At the beginning of every term Harvard students enjoy a one-week "shopping period," during which they can sample as many courses as they like and thus—or so the theory goes—concoct the most appropriate schedule for their semesters. There is a boisterous quality to this stretch, a sense of intellectual possibility, as people pop in and out of lecture halls, grabbing syllabi and listening for twenty minutes or so before darting away to other classes.

The enthusiasm evaporates quickly once the shopping period ends. Empty seats in the various halls and auditoriums multiply as the semester rattles along, until rooms that were full for the opening lecture resemble the stadium of a losing baseball team during a meaningless late-August game. There are pockets of diehards in the front rows, avidly taking notes, and scattered observers elsewhere—students who overcame the urge to hit the snooze button and hauled themselves to class, only to realize that they've missed so many lectures and fallen so far behind that taking notes is a futile exercise. Better to wait for the semester's end, when they can take exhaustive notes at the review sessions that are always helpfully provided—or simply go to the course's Web site, where the professor has uploaded his lecture notes, understanding all too well the character and study habits of his seldom-glimpsed students.

Continued in article

Harvard University's grading policy is outlined at http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/grading.html 
Also see http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/rank_list.html 

Half the undergraduate students at Harvard get A or A- (up from a third in 1985)
Less than 10% get a C or below

All Things Considered, November 21, 2001 · Students' grades at Harvard University have soared in the last 10 years. According to a report issued Tuesday by the dean of undergraduate education, nearly half of the grades issued last year were A's or A-minuses. In 1985, just a third of the grades were A or A-minus. Linda Wertheimer talks with Susan Pedersen, Dean of Undergraduate Education and a Professor of History at Harvard University, about grade inflation.
Harvard Grade Inflation, National Public Radio --- http://www.npr.org/templates/story/story.php?storyId=1133702 
You can also listen to the NPR radio broadcast about this at the above link.

Can no longer reward the very best with higher grades
Students at Harvard who easily get A's may be smarter, but with so many of them, professors can no longer reward the very best with higher grades. Losing this motivational tool could, paradoxically, cause achievement to fall.

"Doubling of A's at Harvard: Grade inflation or brains?" By Richard Rothstein, The New York Times, December 5, 2001 --- http://www.epinet.org/content.cfm/webfeat_lessons20011205 

A Harvard University report last spring complained of grade inflation that makes it easier to get high grades. Now the academic dean, Susan Pedersen, has released data showing that 49 percent of undergraduate grades were A's in 2001, up considerably from 23 percent in 1986.

Colleges and high schools are often accused of tolerating grade inflation, because teachers have adopted lower standards and hesitate to confront lower-performing students. Critics warn that if grading is too easy, learning will lag.

But grade inflation is harder to detect than it seems.

Inflation means giving a higher value to the same thing that once had a lower one. The Bureau of Labor Statistics tracks price inflation, but it is not easy. Automobile prices have gone up, but cars now have air bags and electronic ignitions. Consumers today pay more not only for the same thing but for a better thing. These factors are hard to untangle.

Grade inflation is similarly complicated. More A's could be a result of smarter students. Ivy League colleges compute an academic index for freshmen based on their College Board SAT and achievement test scores. Harvard's index numbers have been rising, and few students have numbers that were common at the low end of the class 15 years ago. So if students are more proficient, there should be more A's, even if grading is just as strict.

At Harvard, Dean Pedersen noted that students might study harder than before, perhaps because graduate schools are more competitive. Classes are now smaller, so better teaching could result in better learning. More A's would then reflect more achievement, not inflation.

What grades measure can also change. Harvard professors now say they demand more reasoning and less memorization. Whether or not this is desirable, higher grades that follow may not be inflationary. Government price surveyors face similar problems when products change: if consumers who once shopped at Sears now buy the same shirt at Nordstrom, are they paying more for the same thing (inflation) or for a different thing (more service)?

Dr. Pedersen agrees that higher grades may sometimes be given for the same work. But she doubts that inflation is the main cause of the rise in grades. Another dean, Harry R. Lewis, calculated that Harvard grades rose as much from 1930 to 1966 as from 1967 to the present, so the trend is not new. Neither are accusations of inflation: a Harvard report in 1894 also warned that grades of A and B had become too easy.

Grade inflation in high schools is elusive as well. RAND researchers found there was actually some national grade deflation from 1982 to 1992 — students with the same math scores got lower grades at the end of the period than at the start.

But seniors with similar scores on entrance exams (the SAT and ACT) now have slightly higher grades than before. Perhaps this inconsistency results from inflation affecting top students (those likely to take the exams) more than others. Or perhaps grades deflated from 1982 to 1992, but inflated at other times.

Since 1993, the State of Georgia has given free college tuition to students with B averages. Critics say grade inflation resulted because, with B's worth a lot of money, high school teachers now give borderline students a greater benefit of the doubt.

But if a promise of scholarships led students to work harder, higher grades would not signal inflation. And indeed, one study found that Georgia's black students with B averages had higher SAT scores than before the program began.

Even if inflation is less than it seems, rising grades pose a problem that rising prices do not. Prices can rise without limit, but grades cannot go above A+. When more students get A's, grades no longer can show which ones are doing truly superior work. This is called "grade compression" and is probably a more serious problem than inflation.

Students at Harvard who easily get A's may be smarter, but with so many of them, professors can no longer reward the very best with higher grades. Losing this motivational tool could, paradoxically, cause achievement to fall.

Continued in the article

Students get two grades from Harvey Mansfield at Harvard University
"The Truth About Harvard," by Ross Douthat, The Atlantic, March 2005 --- http://www.theatlantic.com/doc/print/200503/douthat 
Bob Jensen's threads on grade inflation are at http://www.trinity.edu/rjensen/assess.htm#GradeInflation 

He paused, flashed his grin, and went on. "Nevertheless, I have recently decided that hewing to the older standard is fruitless when no one else does, because all I succeed in doing is punishing students for taking classes with me. Therefore I have decided that this semester I will issue two grades to each of you. The first will be the grade that you actually deserve —a C for mediocre work, a B for good work, and an A for excellence. This one will be issued to you alone, for every paper and exam that you complete. The second grade, computed only at semester's end, will be your, ah, ironic grade — 'ironic' in this case being a word used to mean lying —and it will be computed on a scale that takes as its mean the average Harvard grade, the B-plus. This higher grade will be sent to the registrar's office, and will appear on your transcript. It will be your public grade, you might say, and it will ensure, as I have said, that you will not be penalized for taking a class with me." Another shark's grin. "And of course, only you will know whether you actually deserve it." 

Mansfield had been fighting this battle for years, long enough to have earned the sobriquet "C-minus" from his students, and long enough that his frequent complaints about waning academic standards were routinely dismissed by Harvard's higher-ups as the out-of-touch crankiness of a conservative fogey. But the ironic-grade announcement changed all that. Soon afterward his photo appeared on the front page of The Boston Globe, alongside a story about the decline of academic standards. Suddenly Harvard found itself mocked as the academic equivalent of Garrison Keillor's Lake Wobegon, where all the children are above average.

You've got to be unimaginatively lazy or dumb to get a C at Harvard (less than 10% get below a B-)
Harvard does not admit dumb students, so the C students must be unimaginative, troubled, and/or very lazy.
It doesn't help that Harvard students are creatively lazy, gifted at working smarter rather than harder. Most of my classmates were studious primarily in our avoidance of academic work, and brilliant largely in our maneuverings to achieve a maximal GPA in return for minimal effort.
"
The Truth About Harvard," by Ross Douthat, The Atlantic, March 2005 --- http://www.theatlantic.com/doc/print/200503/douthat 

This may be partly true, but I think that the roots of grade inflation —and, by extension, the overall ease and lack of seriousness in Harvard's undergraduate academic culture —run deeper. Understanding grade inflation requires understanding the nature of modern Harvard and of elite education in general —particularly the ambitions of its students and professors. 

The students' ambitions are those of a well-trained meritocratic elite. In the semi-aristocracy that Harvard once was, students could accept Cs, because they knew their prospects in life had more to do with family fortunes and connections than with GPAs. In today's meritocracy this situation no longer obtains. Even if you could live off your parents' wealth, the ethos of the meritocracy holds that you shouldn't, because your worth as a person is determined not by clan or class but by what you do and whether you succeed at it. What you do, in turn, hinges in no small part on what is on your résumé, including your GPA. 

Thus the professor is not just a disinterested pedagogue. As a dispenser of grades he is a gatekeeper to worldly success. And in that capacity professors face upward pressure from students ("I can't afford a B if I want to get into law school"); horizontal pressure from their colleagues, to which even Mansfield gave way; downward pressure from the administration ("If you want to fail someone, you have to be prepared for a very long, painful battle with the higher echelons," one professor told the Crimson); and perhaps pressure from within, from the part of them that sympathizes with students' careerism. (Academics, after all, have ambitions of their own, and are well aware of the vicissitudes of the marketplace.) 

It doesn't help that Harvard students are creatively lazy, gifted at working smarter rather than harder. Most of my classmates were studious primarily in our avoidance of academic work, and brilliant largely in our maneuverings to achieve a maximal GPA in return for minimal effort. It was easy to see the classroom as just another résumé-padding opportunity, a place to collect the grade (and recommendation) necessary to get to the next station in life. If that grade could be obtained while reading a tenth of the books on the syllabus, so much the better.


February 21, 2005 message from Bob Jensen

Below is a message from the former Dean of Humanities at Trinity University. He’s now an emeritus professor of religion.

In particular, he claims Harvard had an A+ grade for recognizing the very top students in a course. I think Harvard and most other universities have dropped this grade alternative.

Second, he claims that there was a point system attached to the grades. Note especially the gaps in the point weightings between A- and B+ and between C- and D+.

If this scale was used at Harvard for a period of time, it was dropped somewhere along the way. This is unfortunate because it created a means by which the top (A+) students could be recognized apart from the many A students and the average students (the median grade at Harvard is now A-). The point system provided a means of differentiating among the many 4.0 GPA graduates at Harvard.

Harvard University's current grading policy is outlined at http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/grading.html 

Also see http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/rank_list.html 

Bob Jensen

-----Original Message----- 
From: Walker, Wm O. 
Sent: Sunday, February 20, 2005 3:50 PM 
To: Jensen, Robert 
Subject: RE: Bill Walker Question

Bob, all I know is what my son told me while he was an undergraduate student at Harvard (1975-1979). As I recall, the scale was the following:

15  A+
14  A
13  A-
11  B+
10  B
 9  B-
 7  C+
 6  C
 5  C-
 3  D+
 2  D
 1  D-

They may have changed the system sometime during the past twenty-six years. I particularly like it because it not only gives the plus and minus grades but also makes a greater distinction between A- and B+ than between B+ and B, etc.
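Below is a small sketch of my own, using only the point values Walker recalls and assuming (as at most schools) that A+ and A both count as 4.0 on a conventional scale, to show how the 15-point scale separates students whom the 4.0 scale lumps together:

FIFTEEN_POINT = {"A+": 15, "A": 14, "A-": 13,
                 "B+": 11, "B": 10, "B-": 9,
                 "C+": 7,  "C": 6,  "C-": 5,
                 "D+": 3,  "D": 2,  "D-": 1}

FOUR_POINT = {"A+": 4.0, "A": 4.0, "A-": 3.7,
              "B+": 3.3, "B": 3.0, "B-": 2.7,
              "C+": 2.3, "C": 2.0, "C-": 1.7,
              "D+": 1.3, "D": 1.0, "D-": 0.7}

def average(grades, scale):
    return sum(scale[g] for g in grades) / len(grades)

# Two hypothetical transcripts: both are straight-A (4.00) on the conventional
# scale, but the A+ grades separate them on the 15-point scale.
mostly_a_plus = ["A+", "A+", "A+", "A", "A", "A"]
all_plain_a   = ["A", "A", "A", "A", "A", "A"]

for label, grades in [("Mostly A+", mostly_a_plus), ("Plain A", all_plain_a)]:
    print(f"{label:>9}: 15-point average {average(grades, FIFTEEN_POINT):.1f}, "
          f"4-point average {average(grades, FOUR_POINT):.2f}")

The uneven spacing (13 down to 11 between A- and B+, and 5 down to 3 between C- and D+) also means that an A- counts for noticeably more than a B+ when averaged, which is the distinction Walker says he likes.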


Question
How do Princeton, Dartmouth, and some other universities deal with grade inflation?

Princeton University takes a (modest) stand on grade inflation
"Deflating the easy 'A'," by Teresa Méndez, Christian Science Monitor, May 4, 2004 --- http://www.csmonitor.com/2004/0504/p12s02-legn.html  

For an analysis of this see http://www.trinity.edu/rjensen/assess.htm#GradeInflation 


Answers as of 1996
The answers as of 1996 lie buried in the online article at http://www.princeton.edu/~paw/archive_old/PAW95-96/11_9596/0306note.html#story4 

Are Students Getting Smarter?

Or are professors just pressured to give out more A's?

Students are getting smarter-or so it seems by the increasingly higher grades they're receiving. Last year, undergraduates earned 8 percent more A's than they did just seven years ago and more than twice as many as they did in 1969-70. In 1994-95, 41 percent of all grades awarded were A's and 42 percent were B's, according to the Office of the Registrar.

Princeton didn't invent grade inflation. According to Registrar C. Anthony Broh, it's a phenomenon of private, highly selective institutions. Yet at the same time as grades are creeping up at Princeton, undergraduate grades nationwide have been going down, according to a federal study released last October. The drop, said Clifford Adelman, a senior research analyst for the Department of Education, is due to a 37 percent increase in the number of people attending college.

Public colleges aren't experiencing grade inflation (a continual increase in the average grade, explained Broh) at the same rate as highly selective institutions, because their curricula are structured differently. Ohio State's curriculum, for example, is designed to weed out students, said Broh.
Princeton saw grades inflate in the late 1960s and early 1970s. The percentage of all grades that were A's jumped from 17 percent in 1969-70 to 30 percent in 1974-75. Students earned higher grades at Princeton and other institutions, in part, because of the Vietnam War. Students whose grade-point averages dropped too low were drafted, said Broh, "so faculty generally felt pressure" to give high marks.

The percentages among grades remained fairly constant from the late 1970s through the early 1980s. In 1987-88, 33 percent of grades were A's. Since then, grades have risen at about the same rate as they did during the early 1970s. The primary reason for the jump, said Broh, is that professors feel some pressure from students to give higher grades so they can better compete for admission to graduate and professional schools.
Princeton's grade distribution is comparable to that of its peer institutions. At Dartmouth the percentage of all grades that are A's rose from 33 percent in 1977-78 to 43 percent in 1993-94, according to Associate Registrar Nancy Broadhead. At Harvard, the hybrid grade A/A- represented 22 percent of all grades in 1966-67 and 43 percent in 1991-92, said spokeswoman Susan Green. C's have virtually disappeared from Harvard transcripts, reported Harvard Magazine in 1993.
Students aren't the only ones who apply subtle pressure to professors. Several years ago, an instructor of linear algebra gave a third of the class C's, and there was "a big uproar," said Joseph J. Kohn *56, the chairman of the mathematics department. He received a "long letter" from a dean who suggested that that kind of grading would discourage the students.

Ten years ago, a third of a class earning C's was normal, said Kohn. Professors feel they're supposed to grade "efforts," not the product, he added.

Another reason for grade inflation, said Broh, is that students are taking fewer courses Pass/D/Fail, which since 1990-91 have been limited to one per term for each student. Therefore, students are earning more A's and B's and fewer P's.

Some observers believe that students are just smarter than they were 25 years ago, and they're working harder. The SAT scores continue to rise, noted Broh.

Even if a professor wanted to "deflate" grades, one person can't expect to "unilaterally try to reinvent grading," said Lee C. Mitchell, the chairman of the English department. One professor alone would be "demonized," if he or she tried to grade "accurately," said Clarence F. Brown, Jr., a professor of comparative literature. "The language of grading is utterly debased," he added, noting that real grading is relegated to letters of recommendation, a kind of "secret grading."
Not every professor and student on campus has succumbed to grade inflation, however. In the mind of Dean of the School of Engineering and Applied Science James Wei, a C is still average. Professors in the engineering school still regularly give grades below B's, though "students are indignant," he said.
According to Dean of the College Nancy Weiss Malkiel, the university periodically reviews grade distribution. The administration encourages faculty members to think carefully about grading patterns, but "we don't tell [them] what grades to give," said Malkiel.

Harvard isn't planning on doing anything about the shift in grades, said Green. Dartmouth, however, last year changed its grading policy. In an effort to assess student performance more effectively, report cards and transcripts now include not only grades, but also the median grade earned by the class and the size of the class. The change may also affect grade inflation, but it's too soon to tell if it has, said Broadhead.
In the end, perhaps grade inflation is inconsequential. As Kohn said, "The important thing is what students learn, not what [grades] they get." And as Dean of the Faculty Amy Gutmann told The Daily Princetonian, "There is no problem [with grade inflation] as long as grades reflect the quality of work done."

Chart:  The graphic is not available online
Infographic by Jeff Dionise; Source: Office of the Registrar

This chart, provided by the Office of the Registrar, shows the percentage of grades awarded over the last 25 years. The percentage of A's and B's increased markedly in the late 1960s and early 1970s and again since the late 1980s. The percentage of P's (pass) dropped dramatically in the early 1970s, in part because the Pass/D/Fail option lost favor among students for fear that those evaluating their academic careers would think they took lighter loads, said Registrar C. Anthony Broh. Also, the university now allows fewer courses to be taken Pass/D/Fail. The percentage of P's peaked in 1969-70, when students went on strike during the Vietnam War and sympathetic faculty gave them the option of receiving either a P or a normal grade. Many students opted for P's, said Broh.

Are Students Getting Smarter?
Or are professors just pressured to give out more A's?
The real issue isn't grade inflation, said Registrar C. Anthony Broh, it's grade "compression." Because most grades awarded are A's and B's, it's hard to differentiate between students at the top of a course.


February 20, 2005 reply from Glen Gray [glen.gray@CSUN.EDU

If you are worried about grade inflation, think about this: a dean of a well-known research university (sorry, I can’t say who) sent a memo to his faculty suggesting that they RAISE the average GPA because grade inflation at other institutions is putting his students at a competitive disadvantage. So now we may have a race to see who has the highest average GPA.

February 20, 2005 reply from Roger Collins [rcollins@CARIBOO.BC.CA

I think the following is unlikely to fly in the continuous assessment environment, but for seven years in the 70's/80s I taught at a UK institution where all major exams (these accounted for around 80% of course marks and were held once per year) were double-marked - once by the instructor directly responsible for the class and once by an associate from the same department. Exam results were also reviewed by a committee responsible for the degree, and samples sent off to an external examiner (one of our externals was a certain David Tweedie).

This method is VERY effective at combating student pressure on instructors, but fairly time-consuming; unless faculty accept it (we did) as part of the normal workload, it may also become expensive....

Regards,

Roger

Roger Collins 
Associate Professor UCC (soon to be TRU) School of Business


No wonder kids take the easy way out:  The era of work and sacrifice is long gone
The pressure for U.S. high schools to toughen up is growing. But when schools respond with stiffened requirements, as many have done by instituting senior projects, they often find that students and parents aren't afraid to fight back.
Robert Tomsho, "When High Schools Try Getting Tough, Parents Fight Back," The Wall Street Journal, February 8, 2005, Page A1 --- http://online.wsj.com/article/0,,SB110782391032448413,00.html?mod=todays_us_page_one 
In Duvall, Wash., Projects Required Months of Work -- Then Parental Protests Kicked In 


Fearing your student evaluations, how much time and trouble should you devote to email questions from your students?
For junior faculty members, the barrage of e-mail has brought new tension into their work lives, some say, as they struggle with how to respond. Their tenure prospects, they realize, may rest in part on student evaluations of their accessibility. The stakes are different for professors today than they were even a decade ago, said Patricia Ewick, chairwoman of the sociology department at Clark University in Massachusetts, explaining that "students are constantly asked to fill out evaluations of individual faculty." Students also frequently post their own evaluations on Web sites like www.ratemyprofessors.com  and describe their impressions of their professors on blogs.
Jonathan D. Glater, "To: Professor@University.edu Subject: Why It's All About Me," The New York Times, February 21, 2006 --- http://www.nytimes.com/2006/02/21/education/21professors.html

Bob Jensen's threads on the dark side of education technology --- http://www.trinity.edu/rjensen/000aaa/theworry.htm


Reed College, a selective liberal arts college in Oregon, where the average grade-point average has remained a sobering 2.9 (on a 4.0 scale) for 19 years.
See below

Valen E. Johnson, a biostatistics professor at the University of Michigan and author of "Grade Inflation: A Crisis in College Education" (Springer Verlag), said the use of student ratings to evaluate teachers also inflates grades: "As long as our evaluations depend on their opinion of us, their grades are going to be high."
See below

Administrators and some faculty at some of the country's top universities have proposed correcting for so-called grade inflation by limiting A's.  It's relatively easy to get an A at Princeton, but it's easier at Harvard.


"Is It Grade Inflation, or Are Students Just Smarter?" by Karen W. Arenson, The New York Times, April 18, 2004 ---  http://www.nytimes.com/2004/04/18/weekinreview/18aren.html 

A million dollars isn't what it used to be, and neither is an A in college.

A's - including A-pluses and A-minuses - make up about half the grades at many elite schools, according to a recent survey by Princeton of the Ivy League and several other leading universities.

At Princeton, where A's accounted for 47 percent of grades last year, up from 31 percent in the 1970's, administrators and some faculty have proposed correcting for so-called grade inflation by limiting A's to 35 percent of course grades.

Not everyone is convinced there is a problem. A recent study by Clifford Adelman of the United States Department of Education concluded that there were only minor changes in grade distributions between the 1970's and the 1990's, even at highly selective institutions. (A bigger change, he said, was the rise in the number of students withdrawing from courses and repeating courses for higher grades.)

Alfie Kohn, author of the coming book "More Essays on Standards, Grading and Other Follies" (Beacon Press), says rising grades "don't in itself prove that grade inflation exists."

"It's necessary to show - and, to the best of my knowledge, it has never been shown - that those higher grades are undeserved," he said.

Is it possible that the A students deserve their A's?

Getting into colleges like Princeton is far more difficult than it used to be. And increasing numbers of students are being bred like racehorses to breeze through standardized tests and to write essays combining Albert Einstein's brilliance with Mother Teresa's compassion.

Partly to impress admissions officers, students are loading up on Advanced Placement courses. The College Board said the number taking 10 or more such courses in high school is more than 10 times what it was a decade ago. And classes aimed at helping them do better on the SAT exams are booming.

"Back in 1977, when I graduated from high school, it had to be less than 25,000 students nationally who spent more than $100 on preparing for the SAT," said John Katzman, founder and chief executive of The Princeton Review, which tutors about 60,000 students a year for the SAT's. "It was the C students who prepped, not the A students," he added. "Now it's got to be circa 200,000 or 250,000 students who are going to spend more than $400 to prepare for the SAT."

But Wayne Camara, vice president of research at the College Board, said that while students are increasingly well prepared, "that in no way accounts for the shift in grades we are seeing."

"Grades are not like temperatures or weights," he said. "What constitutes an A or a B has changed, both in high school and in college."

He said teachers are aware of how competitive the academic world has become and try to help students by giving better grades. "If you graduated from college in the 1950's and you wanted to go to law school or a graduate program, you could," Dr. Camara said. "Today it is very difficult. You are not going to be able to graduate from Harvard or Princeton with a 2.8 grade point average and get into Georgetown Law."

In addition, one recent Princeton graduate who works in investment banking and has participated in recruiting meetings cautioned in a letter to The Daily Princetonian that hiring practices can be superficial, and that grade-point averages are one of the first items scrutinized on a résumé.

Stuart Rojstaczer, a geology professor at Duke who runs the Web site www.Gradeinflation.com, says that higher grades are the result of a culture where the student-consumer is king. "We don't want to offend students or parents," he said. "They are customers and the customer is always right."

Valen E. Johnson, a biostatistics professor at the University of Michigan and author of "Grade Inflation: A Crisis in College Education" (Springer Verlag), said the use of student ratings to evaluate teachers also inflates grades: "As long as our evaluations depend on their opinion of us, their grades are going to be high."

Even if the Princeton plan is approved, Professor Johnson, who unsuccessfully tried to lower grades at Duke University a few years ago, cautioned that reform is difficult. "It is not in the interest of the majority to reform the system," he said. "Assigning grades, particularly low grades, is tough, and it requires more work, since low grades have to be backed up with evidence of poor performance."

But Princeton and others may take some comfort from Reed College, a selective liberal arts college in Oregon, where the average grade-point average has remained a sobering 2.9 (on a 4.0 scale) for 19 years.

The college says it ranks third among all colleges and universities in the proportion of students who go on for Ph.D.s, and has produced more than 50 Fulbright Scholars and 31 Rhodes scholars.

Still, Colin S. Diver, Reed's president, says graduate schools worried about their rankings are becoming less willing to take students with lower grades because they make the graduate schools appear less selective.

"If they admit someone with a 3.0 from Reed who is in the upper half of the class, that counts against them, even if it is a terrific student," Mr. Diver said. "I keep saying to my colleagues here that we can hold ourselves out of the market for only so long."


This might set a legal precedent for all colleges and universities.
It also might root out instructors who give high grades in hopes of higher student evaluations.

The student newspaper at Oklahoma State University has won a three-month fight to get records in an electronic format regarding the grades professors give students.  School officials say the names of the students will be blacked out.  Sean Hill, a journalism student and editor of The Daily O'Collegian, requested the information in November so he could compare the average grades of different sections of the same classes. The Oklahoman in Oklahoma City and the Tulsa World joined in the request.  School officials said at the time they would provide the records, but not in an electronic format. OSU spokesman Gary Shutt now says the school can provide records in the electronic format without jeopardizing student privacy and confidentiality.
Editor and Publisher, February 10, 2005 --- http://www.editorandpublisher.com/eandp/news/article_display.jsp?vnu_content_id=1000798377 

Jensen Comment:  
Backed by studies such as the huge studies at Duke and other colleges mentioned below,  I've long contended that student evaluations, when they heavily impact tenure granting and performance evaluations, present a moral hazard and lead to grade inflation across the campus.  But I'm wary of using data such as that described above to root out "easy" graders.  Some instructors may be giving out higher grades after forcing out weaker students before the course dropping deadline and/or by scaring off weak students by sheer reputation for being tough.  

My solution for the moral hazard of grade inflation caused by fear of student course evaluations entails having colleges create Teaching Quality Control (TQC) Departments that act in strict confidentiality when counseling instructors receiving low student evaluations.  The student evaluations themselves should be communicated only to the TQC Department and the instructors.  Because of moral hazard, student evaluations should not be factored into tenure decisions or performance evaluations.  In order to evaluate teaching for tenure and performance evaluations, instructors should take turns sitting in on other instructors' courses, with the proviso that no instructor sits in on a course in that instructor's own department/school.  In other words, English instructors should sit in on accounting courses and vice versa.  This need not entail sitting in on all classes, and incentives must be provided for faculty to take on the added workload.  

I think that this, coupled with a TQC confidential counseling operation, will make good teachers even better as well as reduce the bad teaching and grade inflation that have caused some schools like Princeton University to put caps on the number of A grades.  Corporations have Quality Control departments.  Why shouldn't colleges try improving quality in a similar manner?  The average course grade across many campuses is now a B+ to an A-.  Student evaluations are a major factor, if not the major factor, in giving a C grade a bad name.  I'm not saying that professors with high student evaluations are easy graders.  What I am saying is that weak or unprepared teachers are giving easy grades to improve their own student evaluations. 

Student evaluations of instructors are even more of a moral hazard when they are made available to students and the public at large.


Some Reasons Harvard University Does Not Require Student Evaluations
Student course evaluations are ubiquitous these days, whether they be at a national site like ratemyprofessors.com or sponsored by individual institutions. But Harvard University faculty members are split on whether evaluations should be mandatory . . . Harvey C. Mansfield, a professor of government, reminded colleagues at the Tuesday meeting that there are plenty of pitfalls to evaluations. He said that evaluations promote “the rule of the less wise over the more wise … on the assumption students know best.” Mansfield called requiring evaluations an “intrusion on the sovereignty of the classroom,” and said that evaluations “reward popular teachers at the expense of serious teachers … popular teachers can be serious but many are not, and many teachers are serious but not popular.” Mansfield added that he would like to hear more discussion of evaluations, and to see their role diminished rather than increased.

David Epstein, "One Size Doesn’t Fit All," Inside Higher Ed, May 4, 2006 --- http://www.insidehighered.com/news/2006/05/04/harvard


Is grade deflation hitting the Ivy League?  

"Deflating the easy 'A'," by Teresa Méndez, Christian Science Monitor, May 4, 2004 --- http://www.csmonitor.com/2004/0504/p12s02-legn.html 

Princeton students fear that a tough stance on grades may harm campus culture - and limit their appeal to graduate schools.

When Adam Kopald exits Princeton University's gothic gates as a graduate in June 2005, he will not have a GPA. Nor will he be assigned a class rank. He may not even know the grades of his closest friends. 

It's this lack of competition, say Princeton students, that has made for a much less cutthroat environment than one might expect from one of the country's most academically elite universities.

Some students argue that that's been a good thing for their school, where they say they strive to do their own best work rather than to outdo one another - but it's a luxury they now fear losing.

A new grading policy, to go into effect next year, will reduce the number of A-pluses, A's, and A-minuses for all courses to 35 percent, down from the current 46 percent. A's given for independent work will be capped at 55 percent.

"There's definitely going to be a competition that didn't exist before," says Mr. Kopald, a history major. "Because any way you cut it, there are only 35 percent of people who are going to get A's."

At a time when campuses are clamoring to appear more interested in the whole person, students' mental health, and well-rounded development, some wonder if the message being sent by instituting quotas isn't contradictory.

School administrators, however, argue that grade inflation cannot be ignored. Princeton first examined the problem six years ago.

"Our feeling then was that we could just let it go, and over the next 25 years everyone would be getting all A's," says Nancy Weiss Malkiel, dean of the college. "But would that really be responsible in terms of the way we educated our students?"

According to Dean Malkiel, the goals of 35 percent and 55 percent will align the number of A's granted with figures from the late 1980s and early '90s.

Other schools have tried to address grade inflation, using measures like including contextual information on transcripts, says Malkiel. And in 2002, Harvard limited students graduating with honors to 60 percent. But as far as Malkiel knows, this is the first widespread move to stem the trend of upward spiraling grades that dates back to the 1970s.

What caused grades to inflate

Experts blame grade inflation on everything from fears of the draft during the Vietnam War to a consumer mentality that expects higher marks in exchange for steeper tuition.

But some professors say students today are increasingly bold about haggling for higher marks. Often it's easier to give an A-minus instead of a B-plus than to argue.

Malkiel also says a broader culture of inflation may be a factor. Everything from high school GPAs to SAT scores have been on the rise.

But not all see the phenomenon of rising grades as a bad thing. William Coplin, a professor at the Maxwell School at Syracuse University, feels strongly there are a number of reasons why grade inflation is not just acceptable - but good.

He says that students learn in the classroom less than half of what they need to know for real life. Distributing higher grades gives them room to explore other areas of interest and to develop as people.

"Most students do not see college as a place to develop skills. They see it as a place to get a degree and have a high GPA," he says. "The truth is, skills are more important than GPA." Professor Coplin worries that attempting to stamp out grade inflation is simply "making the kids even crazier about grades."

Annie Ostrager, a politics major at Princeton, isn't convinced that grade inflation is a problem either.

"I personally have not perceived my grades to be inflated," says the junior. "I work hard and get good grades. But I don't really feel like grades are flying around that people aren't earning."

But most Princeton students acknowledge there is a problem - although many doubt that quotas are the best solution.

Matt Margolin, president of the student government, estimates that 325 of the 350 e-mails he has received from Princeton students express frustration with the new grading policy.

Princeton isn't alone in the battle against inflated grades. A study last year found that A's accounted for 44 to 55 percent of grades in the Ivy League, MIT, Stanford, and the University of Chicago.

Will Princeton stand alone?

Yet by drawing public attention to Princeton in particular, students worry it may come to be seen as the most flagrant example.

"Putting it in the public light like this has really damaged the image of a Princeton transcript," says Robert Wong, a sophomore studying molecular biology.

Malkiel has assured students this isn't true. In conversations with admissions officers at graduate schools, employers, and fellowship coordinators across the country, she says she has been told "that they would know going forward that a Princeton A was a real A." They even suggested that tougher grading will ultimately benefit Princeton students.

But not everyone is convinced.

"I would like to go to law school, so my eye has been on this proposal very carefully," says Mr. Margolin, a junior and a politics major. "My understanding is that law school decides your fate based mostly on GPA and LSAT scores."

"A call for an end to grade inflation," by Mary Beth Marklein, USA Today, May 2, 2002 --- http://www.usatoday.com/news/health/2002-02-05-grade-inflation.htm 

At Harvard University, a recent study found that nearly half of all grades awarded were A or A-minus.

A tenured professor is suing Temple University, saying he was fired because he wouldn't make his courses easier or give students higher grades.

And now, a new report prepared by the American Academy of Arts & Sciences says it's time to put an end to grade inflation.

Concerns about grade inflation, defined as an upward shift in the grade-point average without a corresponding increase in student achievement, are not new. The report cites evidence from national studies beginning as early as 1960. And while it is a national phenomenon, authors Henry Rosovsky, a former Harvard dean, and Matthew Hartley, a lecturer at the University of Pennsylvania, say the phenomenon is "especially noticeable" in the Ivy League.

They blame the rise of grade inflation in higher education on a complex web of factors, including:

An administrative response to campus turmoil in the 1960s, and a trend, begun in the 1980s, in which universities operate like businesses for student clients.

The advent of student evaluations of professors and the increasing role of part-time instructors.

Watered-down course content, along with changes in curricular and grading policies.

"At first glance (grade inflation) may appear to be of little consequence," the authors write. But it "creates internal confusion giving students and colleagues less accurate information; it leads to individual injustices (and) it may also engender confusion for graduate schools and employers." They say schools should establish tangible and consistent standards, formulate alternative grading systems and create a standard distribution curve in each class to act as a yardstick.

Rosovsky and Hartley's report is available at www.amacad.org/publications/occasional.htm

May 4, 2004 reply from Hertel, Paula [phertel@trinity.edu

I just now heard on NPR an interview with one of the Princeton faculty who voted for the new policy to limit A’s to 35%. She (a professor of economics) pointed out that one of the biggest factors in establishing grade inflation is the perception of faculty that course evaluations will be lower if grades are lower. We should add that, even if the perception is wrong, its existence and influence do our students no favor in the long run.

It’s the nature of the course evaluations that must change!

Paula

 

May 4, 2004 reply from Bob Jensen

Trinity University professors may have too much integrity to allow student evaluations to inflate grades. However, we do have marked grade inflation caused by something. Research studies at other universities found that tough graders take a beating on course evaluations:

Duke University Study --- http://www.aas.duke.edu/development/Miscellaneous/grades.html 

Lenient graders tend to support one theory for these findings: students with good teachers learn more, earn higher grades and, appreciating a job well done, rate the course more highly. This is good news for pedagogy, if true. But tough graders tend to side with two other interpretations: in what has become known as the grade attribution theory, students attribute success to themselves and failure to others, blaming the instructor for low marks. In the so-called leniency theory, students simply reward teachers who reward them (not because they're good teachers). In both cases, students deliver less favorable evaluations to hard graders.

University of Washington Study --- http://www.washington.edu/newsroom/news/k120497.html

"Our research has confirmed what critics of student ratings have long suspected, that grading leniency affects ratings. All other things being equal, a professor can get higher ratings by giving higher grades," adds Gillmore, director of the UW's office of educational assessment.

The two researchers' criticisms, which are counter to much prevailing opinion in the educational community, stem from a new study of evaluations from 600 classes representing the full spectrum of undergraduate courses offered at the UW. Their study is described in a paper being published in the December issue of the Journal of Educational Psychology and in two papers published in a special section edited by Greenwald in the November issue of the American Psychologist.

Rutgers University --- http://complit.rutgers.edu/palinurus/ 

An article that drew a lot of responses in the media. Among other things, the author claims that "Some departments shower students with A's to fill poorly attended courses that might otherwise be canceled. Individual professors inflate grades after consumer-conscious administrators hound them into it. Professors at every level inflate to escape negative evaluations by students, whose opinions now figure in tenure and promotion decisions."

Archibold, Randal C. "Just Because the Grades Are Up, Are Princeton Students Smarter?" The New York Times (Feb 18, 1998), Sec: A P. 1.

A long article following a report on Princeton’s grade inflation. Includes a presentation of possible reasons for the phenomenon.

Goldin, Davidson. "In A Change of Policy, and Heart, Colleges Join Fight Against Inflated Grades." The New York Times (Jul 4, 1995), Sec: 1 P. 8. 

The article presents the tendency of elite institutions to follow Stanford and Dartmouth’s lead in fighting grade inflation. Brown stands out in refusing the trend by making the transcripts reflect achievements only. The rationale: "'When you send in your resume, do you put down all the jobs you applied for that you didn't get?' said Sheila Blumstein, Brown's dean. 'A Brown transcript is a record of a student's academic accomplishments.'"

University of Montana --- http://www.rtis.com/reg/bcs/pol/touchstone/november97/crumbley.htm 

The mid-term removal of a chemistry instructor at the University of Montana in 1995 because he was "too tough" illustrates the widespread grade inflation in the United States. Grade inflation will not diminish until the root cause of grade inflation and course work deflation is eliminated: widespread use of anonymous student evaluations of teaching (SET). If an instructor calls a student stupid by giving low marks, it is unlikely the student will evaluate the instructor highly on an anonymous questionnaire.

As more and more research questions the validity of summative SET as an indicator of instructor effectiveness, ironically there has been a greater use of summative SET. A summative SET has at least one question which acts as a surrogate for teaching effectiveness. In 1984, two-thirds of liberal arts colleges were using SET for personnel decisions, and 86% in 1993. Most business schools now use SET for decision making, and 95% of the deans at 220 accredited undergraduate schools "always use them as a source of information," but only 67% of the department heads relied upon them. Use of SET in higher education appears frozen in time. Even though they measure the wrong thing, they linger like snow in a shaded corner of the back yard, refusing to thaw.

 

Mixed opinions voiced in The Chronicle of Higher Education (not usually backed by a formal study) --- http://chronicle.com/colloquy/98/evaluation/re.htm 

CONCLUSIONS

Causes of grade inflation are complex and very situational in terms of discipline, instructor integrity, pedagogy, promotion and tenure decision processes, course demand by students, pressures to retain tuition-paying students, etc.  I suspect that if I dig harder, there will be a few studies attempting to contradict the findings above.  

One type of contradictory study does not impress me on this issue of grade inflation.  That is a study of the instructors rated highest by students, say the top ten percent of the instructors in the college.  Just because some, or even most, of those highly-rated instructors are also hard graders does not get at the root of the problem.  The problem lies with instructors receiving average or below-average evaluations who see more lenient grading as a way to raise their student evaluations.

One thing that is absolutely clear in my mind is that teaching evaluations are the major cause of system-wide grade inflation.  My opinion is in part due to the explosion in grade inflation that accompanied the start of anonymous course evaluations being reported to administrators and P&T committees.  In the 1960s and 1970s we had course evaluations in most instances, but these were always considered to be private information owned only by the course instructors, who were generally assumed to be professionally responsible enough to seriously consider the evaluation outcomes in private.  

There are no simple solutions to grade inflation.  The Princeton 35% cap on A grades is not a solution if some members of the faculty just refuse to abide by the cap (and faculty are known to be proudly independent).  Grades are highly motivational and, as such, motivate for different purposes in different situations.  Student evaluations of faculty serve different purposes and, as such, motivate faculty for different purposes in different situations.

I have no solution to recommend at the moment for grade inflation.  But I would like to recommend that my own university, Trinity University, consider adopting an A+ grade with a cap of 10% (not rounded) in each class.  For example, a class with 19 students would be allowed to have one A+ student;  a class with 20 students could have two A+ students.  The A+ would not be factored into the overall gpa, but it would be recorded on a student's transcript.  This would do absolutely nothing to relieve grade inflation.  But it would help to alleviate the problem of having exceptional students in a class lose motivation to strive harder for the top grade.  One of the problems noted in the Duke, Washington, and Rutgers studies is that exceptional students don't strive as hard after they are assured of getting the highest grade possible in the class.  Why not make them strive a little bit harder?

It was just plain tougher in the good old days.  Some sobering percentages about grade inflation --- http://www.cybercollege.com/plume3.htm 

In 1966 at Harvard, 22% of all grades were A's. In 2003, that figure had grown to 46%. In 1968 at UCLA, 22% of all grades were A's. By 2002, that figure was 47%.

The so-called Ivy League schools, MIT, Stanford, and the University of Chicago, averaged 50% A's (in recent years).

The most immediate effect of giving almost 50% A's is that exceptional students see little reason to try to excel. They know they can "coast their way" to an A without really being challenged.

 

Awarding students A's for C+ work robs the best and the brightest.
Prof. Roger Arnold --- http://www.cybercollege.com/plume3.htm  

May 4, 2004 reply from David R. Fordham [fordhadr@JMU.EDU

RIGHT on!

Back when I was program director, it was empirically demonstrable that grade distribution (as well as time of day, number of empty seats in the classroom, and male-versus-female professor and student pairings -- all individually, let alone collectively) was able to overpower individual identity when it came to student evaluations of faculty.

I never, ever, referred to them as Student Evaluations of Faculty. I always referred to them as “Student Perceptions”. I used them as ONE (and a minor one at that) of many factors in evaluating faculty. One of the more valid, in my mind, measures of faculty performance is feedback from 5-year+ alums. Although delayed, such feedback says much more about the quality of “education” than anything which could be generated contemporaneously. This is the major reason for my contempt for “assessment programs” of the form in which they are currently being promoted by the Asinine Administrators Compelling Sales of Bullexcrement… (I may not have the full name of the organization completely correct, since they recently changed their official moniker, but I’m hoping everyone will forgive my mistake and go with the acronym.)

As always,

Argumentative, Assertive, Contrary, Scathing, and Bullheaded,

David R. Fordham
PBGH Faculty Fellow
James Madison University

May 4, 2004 reply from Linda Kidwell from the University of Niagara (visiting this year Down Under)

I stumbled into a different approach here in Australia during my visiting year. There are percentage parameters for grade distribution at some universities. For example, only a small percentage can be awarded an HD (A), and there's a maximum percentage that can receive Fs. There's essentially a bell-curve expectation. I had a bit of trouble first term here because my grade distribution was too high for the faculty guidelines.

I have mixed feelings about it. I consider it a violation of academic freedom in part, though perhaps suggested guidelines are good. And if I have a particularly good class, I don't want to artificially lower their grades. On the other hand, it does take some of the grade pressure off -- I never find myself tempted to curve a tough exam, and I don't automatically round upward for those borderline grades. So it's a mixed bag!

What I'd like to see is a bit more concern over the granting of latin honors in the US. When I was a student at Smith, only the top 2 students earned Summa Cum Laude, the next 25 or so got Magna, and next 50 got Cum Laude (I'm guessing at the latter 2, but you get the idea). So you really had to be among the best to earn it. At Niagara, my home institution, it is based on GPA. In business we have tougher grading standards (tougher courses too?) than other areas. As a result, a small percentage of our business students earn latins, but a staggering 70% of the education majors get them. Are all the brilliant students really in the school of education? Every year at commencement the business and arts & science faculty roll their eyes as those honors are announced. I think it cheapens the whole honor, and it is unfair to students in the areas that don't inflate grades. It's also unfair to those education students who really are top-flight.

Linda Kidwell

May 5, 2004 reply from Robert Holmes Glendale College [rcholmes@GLENDALE.CC.CA.US

Some time ago I mentioned to the list that I agreed to meet with some of the students in my on-line course for extra instruction. At least one of you said that since not everyone could come to my office, I was being unfair to the class by allowing the students who could come to my office to have added help. I thought at the time how could I be unfair by helping students? My school does not have a maximum or minimum limit on the number of A's or B's we assign to students. We are expected to assign grades based on mastery of the subject, not by rank in the class. When grades are assigned by rank in the class, then giving one student the benefit of my time and denying it to others is unfair. Those who can come to my office are better able to beat the students who can not come. I do not like the idea of the competitive model. I do not want to frustrate students who are eager for learning because it is not fair to the rest of the class. I would much rather see students helping each other to the benefit of both instead of withholding knowledge in order to beat their classmates. It is probably easier to assign grades when you just add up the points and the first X% get A's and so on, but I would hope most of us know what we want the students to get from our classes, and those who get it should be rewarded and those who don't get it should not be rewarded, no matter how many of each are in a particular class. As the college bound population grows, the "top" schools in the country should be having more high quality applicants to choose from, and they should find that more students are mastering the subject matter, and thus receiving higher grades on average.

May 5, 2004 reply from Bob Jensen

Hi Robert,

As usual, you raised an interesting point.  I think most of us are accustomed to motivating our top students to reach for the stars.  We want to graduate students who can get into the top graduate schools, leading CPA firms, top corporations, etc.  We want to bring honors to our university by watching students get outside honors such as Rhodes Scholarships and medals for CPA examination scores.

One of the best ways to motivate top students is grade competition. Top students generally strive for the top grade in a class and the highest gpa in the college. But they may not strive any harder than it takes to get the top grade in a class, at least that's what the studies from Duke, Washington, and Princeton are telling us.

Now the Australian system that Linda Kidwell describes with a bell-curve grade distribution and a limit of say 2% for that Highest Honors designation is aimed at motivating the best students in the class to obtain the highest honor possible on their transcripts.  These top students work night and day to earn their star designations.

Your grading system is not designed to motivate top students to be highest-honor students.  There is no grade incentive for an exceptional student in your class to work any harder than it takes to earn the A, which may require half the effort that an average student must put in, even counting the extra hours that average student spends with you for the same A grade.  

But your system may have turned some student's life around, a student who never thought it was possible to earn an A grade in an accounting class.  You have thus met what is probably your main goal as an educator.  And you have not achieved grade inflation by simply dumbing down your course.

I guess what we conclude from your system is that there are different grading scales for different purposes.  Perhaps there is more student objection to grade inflation in the Ivy League schools because these students are reaching for the highest stars required to gain entry into elite graduate programs or some other elitist future where only the highest stars have an entry opportunity.

Your A students, on the other hand, may have a longer-run shot at the top because you helped coax them out of the starting gate.  

I guess I can't find fault with this except that I hope you kick ass when you encounter an exceptional student.


May 5, 2004 reply from Chuck Pier [texcap@HOTMAIL.COM

As a follow-up to my commentary on the number vs. letter grading system, when I first got to Appalachian State I was thrilled that we used the + & - system because I felt I could provide differentiation for the students and not lump the students with a score of 80 with the students that scored an 89. However, what I have realized as I approach the end of my second year here is that the more divisions we have in the grading scale, the more boundary lines we create. The more boundary lines we create, the more students are disappointed about missing the next level and the more they will ask or pester you to help them. After all, "we are only talking about a point or two!"

This time of the year is always the most stressful for me. Does it get any better after we've been doing it for a while? (One of David's rhetorical questions.) ;>)

Chuck

Charles A. Pier 
Assistant Professor Department of Accounting 
Walker College of Business Appalachian State University 
Boone, NC 28608 
email:
pierca@appstate.edu 

May 7, 2004 reply from Randy Elder [rjelder@SYR.EDU

I've followed the thread on grade inflation with much interest. It is a topic that I have great interest in, and here are some observations.

1. Relation between grades and evaluations - I think that the faculty perception that grades influence evaluations is a much greater problem for grade inflation than the actual relation, which I don't believe is that strong. An even greater problem is that bad teachers use grading difficulty as an excuse for their evaluations.

2. Student evaluations - I also believe that we place way too much reliance on student evaluations. Evaluations aren't going away, but there is minimal effort to evaluate the actual effectiveness of teachers.

3. Grading policies - Some of the discussion has focused on grading on the "curve". I find that professors either grade using some sort of curve or use fixed evaluation criteria. I much prefer the latter, as it does not place students into competition with each other. More importantly, it allows students to better know where they stand in the course, and attribute their performance to their own effort. My courses always have a fixed number of points, and I inform students of the minimum cutoffs for each grade level.

4. Sample exams - In the Syracuse University Whitman School of Management, it is policy to make some sample exam material available. The reason is to provide equal access, on the assumption that there are old exams floating around in frat houses. The theory is to give students an idea of the types of questions to be asked. I also encourage students to use it as a diagnostic tool. Unfortunately, I believe most students misuse the sample exams and focus on the answers, rather than the knowledge to be tested.

5. Grading information - At SU, we have historically not made much grading information available, unlike my experience at public universities. We are moving toward much greater availability of this information. I hope that this will eliminate some posturing about grades (prof who claims to be tough but isn't; belief that prof X gets good grades only because he grades easy, etc.) We also hope to provide some grading guidelines that will serve to reduce some grade inflation.

Randy Elder
Associate Professor and Director
Joseph I. Lubin School of Accounting
Martin J. Whitman School of Management
Syracuse University
Syracuse, NY 13244-2130
Email: rjelder@som.syr.edu 
Phone: (315) 443-3359
Fax: (315) 443-5457

After I asked Randy to elaborate on his Point 5 above regarding grading information disclosure, he replied as follows on May 10, 2004:

Bob,

Thanks for the compliment. I wasn't sure that my remarks were that thoughtful as I was reading AECM messages on a LIFO basis and discovered lots more good input on the subject after my post.

We do not make grade information available to students. However, I believe it may be helpful to do so as it eliminates misinformation that is passed around informally and on the web (you might want to check out the site www.ratemysuclass.com). This web site is spreading to other universities.

We make summarized grading information available to department chairs to share with faculty. We have tried to focus on courses by omitting faculty names. The accounting department has established grading guidelines by course level, and I expect the School of Management to do the same in the near future. I emphasize that these are guidelines, and faculty can deviate from them.

I have been a strong advocate of having such policies, and was influenced by my time as a doctoral student at Michigan State, and year visit at Indiana. As a doctoral student, I wanted to make sure that my grading conformed to grading by full-time faculty. I was directed to a file that had a complete grading history for every course. At Indiana, the department shared a 10-year grading history for every course. During my visit at Indiana, the AIS department adopted grading guidelines that we modeled ours after.

Randy

May 11, 2004 reply from Bob Jensen

Hi Randy,

I follow rate-my-class ( http://www.ratemysuclass.com/browse2.cfm?id=111  ) only as a curiosity.

It is an illustration of the evils of self-selection and bias. Some professors actually encourage selected students to send in evaluations. Naturally these tend to be glowing evaluations.

Most courses reviewed suffer from self-selection bias of disgruntled students. Most reviews tend to be negative. The number of students who send in reviews is miniscule relative to the number who take the courses. I mean we're talking about epsilon here!

Disgruntled students also seem to have a competition regarding who can write the funniest disparaging review.

Fortunately, the site seems to be ignored where it counts.

Bob Jensen

May 12, 2004 reply from David R. Fordham [fordhadr@JMU.EDU

Another one is:

www.ratemyprofessor.com 

I use it as an example of how gullible people are... taking Internet sites as Gospel without considering where the data comes from...

David R. Fordham 
PBGH Faculty Fellow 
James Madison University

May 5, 2004 reply from Jagdish Gangolly [JGangolly@UAMAIL.ALBANY.EDU

Bob,

I think it is important to provide incentives to be the best. It is also important to provide incentives to be NOT at the bottom.

In the old days, at Cambridge University, at least in the Mathematical Tripos, the students were graded into four classes: senior wrangler (only one student could be this), wranglers, senior optimes, and junior optimes. During the commencement, the student at the bottom of the totem pole would be required to carry the "wooden spoon" (for a picture of it click on http://www.damtp.cam.ac.uk/user/sjc1/selwyn/mathematics/spoon.html ), to signify that (s)he was good mainly for stirring the oats.

While draconian, the wooden spoon provided sufficient incentives to the students not to be the one to carry it. The tragedy is that nowadays many students might carry it with pride (to be called not-a-geek or nerd).

Jagdish

May 5, 2004 reply from Bob Jensen

Hi Jagdish,

I loved the link at http://www.damtp.cam.ac.uk/user/sjc1/selwyn/mathematics/spoon.html 

But I have one question:

Wooden spoon too quick 
Make student much too sick

Wooden spoon too late 
Make student out of date

Wooden spoon on time 
Make student want to climb

Main question when I teach a goon 
Where is it best to place that spoon?

Thanks,
Bob Jensen

May 5 reply from Jagdish Pathak

I find grades by themselves faulty in the scenario of those schools where the very best are chosen to be privileged students, viz. the Ivy League ones. It is absolutely wrong to have more than one grade in such schools, in my view. All of us are aware that these schools admit only the top rung of SAT scorers, so what value has been added in four years by the school if these students come out with less than an 'A' grade?
I believe there is a way to differentiate all these potential 'A's, and that is by differentiating the 'A' grade itself. The very best, or the top 5-10%, would automatically acquire 'AAA'; the major middle group would acquire 'AA'; and the remaining minority would get 'A'. There could be a theoretical provision for a 'B' or 'F', which would be a 'B' or 'F' like anywhere else, and a student would have only one additional chance to make it into the higher AAA, AA, or A grade.

How does it sound? Please forgive me if I have sounded a bit judgmental.

Jagdish Pathak, PhD
Assistant Professor of Accounting Systems
Accounting & Audit Area
Odette School of Business
University of Windsor
401 Sunset
Windsor, N9B 3P4, ON
Canada

May 5, 2004 reply from Bob Jensen

Hi Jagdish,

I think a “rose by any other name is a rose.”

I’m not certain whether AAA/AA/A/B/C/D/F is much different than 6/5/4/3/2/1/0 in the eyes of a student in a class.  An ordinal ranking with seven categories is an ordinal ranking with seven categories by any other name.

Other ordinal rankings by other names may be somewhat different.  Whether a ranking has two categories (P/F), three categories (H/M/L), five categories (A/B/C/D/F), or N categories (a full ranking of 1/2/3/…/N students) changes the nature of the competition.  The more ranking categories, the more intense the competition becomes to get the highest possible grade.  For example, in the U.S. Military Academy the graduates are ranked from top to bottom, and this makes for some intense competition to be the top graduate (although the lowest ten prospective graduates may decide to compete in a race to the bottom just for the distinction of being last after earning a decent rank becomes hopeless).

Another problem is one of aggregation across courses.  For example, an ordinal scale of A/B/C/D/F becomes a cardinal scale carried out to two decimal points when we transform a set of grades into something like a gpa of 3.47.  We have thereby created a cardinal way to rank graduates on a continuum when the inputs to the cardinal outcomes are only ordinal A/B/C/D/F grades for every course.

Students are most interested in how rankings affect them in later life.  For example, suppose Big Four accounting firms will only interview students with a gpa of 3.30 or above.  In that case, weaker students will advocate more grade inflation so they can make the cut.  Top students will advocate grade deflation so that the pool of students having a gpa higher than 3.30 is smaller.  For example, suppose grade deflation leaves a pool of 10 qualified graduates whereas grade inflation leaves a pool of 40 qualified graduates.  If only nine winners are going to be chosen from the pool, then top students have better odds with grade deflation.
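Here is a small illustrative sketch in Python of both points (the grades, the 4-point mapping, the 3.30 cutoff, and the pool sizes are my own illustrative numbers, not figures from any study or recruiter mentioned above): how ordinal letter grades aggregate into a cardinal gpa, and how the size of the pool clearing a cutoff changes a qualified student's odds when only nine slots are filled.

points = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}   # the usual 4-point mapping

grades = ["A", "A", "B", "A", "B", "C", "A", "B"]             # one hypothetical transcript
gpa = sum(points[g] for g in grades) / len(grades)
print(f"Ordinal letter grades aggregate to a cardinal gpa of {gpa:.2f}")

slots = 9                          # hypothetical number of interview slots
for pool in (10, 40):              # deflated grading vs. inflated grading
    print(f"Pool of {pool} above the 3.30 cutoff: "
          f"each qualified student's odds are roughly {slots/pool:.0%}")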

One problem we are having at the K-12 level is that students are aspiring to less.  I will forward Steve Curry’s opinion on this.

 Bob Jensen

May 5, 2004 reply from Steve Curry

The five letter grades were supposed to be a scale with C meaning average. A and B were above average, D and F were below average. The youth and college kids I work with at church are not interested in this scale. (Nor the related 100-point scale, nor the 4.0 GPA scale.) The parents want the A, the kids themselves are much more in the pass/fail mindset. It’s like the joke what do you call someone who graduated at the bottom of the class in medical school? Doctor. Whether this is an overall societal trend, I cannot say. It may be useful to find out. If so, our evaluations of them and their evaluations of us need to change.

When the mandatory faculty evaluations were introduced back in 1987, I heard one professor argue that there should only be one question: “Did you learn anything?” From what I’ve seen in the teens I know, this simple evaluation is what they want. When the pass/fail kids become the pass/fail parents and teachers, the various scaled systems may not survive. If change is to occur, it will be long and painful.

Another question arises: How important is evaluation in the first place? Certainly education that is preparing students for life needs to evaluate whether the student has learned what is necessary but what about the part of education that is learning for learning’s sake? Someone who wants to become a banker certainly needs to be taught amortization and there needs to be an evaluation to see if they understand the concept and its application before they are certified. But is it really necessary to evaluate a person who takes a history course simply because they love the story? Evaluating the former is easy. Give them some numbers and see if they get it right (pass/fail). The latter is more difficult. Which details does the instructor think are important? This subjectivity lends itself more to a scaled evaluation but the basic question is if evaluation is even necessary at all. Back to the simple question “Did you learn anything?”

All this may help explain the rise of technical training in our society where you either get the certificate or you don’t. Maybe Career Services may have some insight as to whether campus recruiters even look at the transcript. In my first job out of college, the phone company never requested a transcript, they just asked if I had a degree. Have our recent graduates encountered the same?

That we even have a concern over grade deflation (a few years ago we were discussing grade inflation and the Lake Wobegon Effect) draws into question the credibility of our current evaluation system in the first place. If average truly is average then the average grade should have been, should be, and should always be a C. If it isn’t, this suggests the evaluation system is not accurate or impartial. It also implies it is not fair.

Stephen Curry Stephen.Curry@Trinity.edu
Information Technology Services Phone: 210-999-7445
Trinity University   http://www.trinity.edu\scurry 
One Trinity Place 
San Antonio, Texas 78212-7200

May 5, 2004 response from akonstam@trinity.edu 

I have never understood why faculty are interested in having lower grades in the class. Grade inflation might be caused by: 

1. Better students. Should not the better students at Harvard get better grades? When we change our average student SAT from 1000 to 1250, should they not get better grades?

2. Maybe teaching and teaching tools have become more effective.

3. Are all courses equally hard, and should they be? Do we really think art courses and calculus courses need to be equally difficult?

With deference to the studies Bob Jensen cites, there are too many variables in producing better grades to pin down the cause effectively.

 Aaron Konstam 
Computer Science Trinity University 
One Trinity Place. San Antonio, TX 78212-7200

May 5, 2004 reply from Bob Jensen

Aaron wrote the following:

****************

1. Better students. Should not the better students at Harvard get better grades? When we change our average student SAT from 1000 to 1250, should they not get better grades?
**************

Hi Aaron,

I think your argument overlooks the fact that the people raising the most hell over grade inflation are the best students currently enrolled in our universities, especially students in the Ivy League universities.  If 50% of the students get A grades at Harvard, the Harvard grade average becomes irrelevant when Harvard graduates are attempting to get into law, medical, and other graduate schools at Harvard and the other Ivy League graduate schools.  Virtually all the applicants have A grades.  Where do admissions gatekeepers go from there in an effort to find the best of the best?

The uproar from top students at Princeton was a major factor leading to Princeton's decision to put a cap on the proportion of A grades.  

Some years back the Stanford Graduate School of Business succumbed to pressures from top MBA students to cap the highest grades in courses to 15% of each class.  This became known as the Van Horne Cap when I was visiting at Stanford (Jim Van Horne was then the Associate Dean).  The reason the top students were upset by grade inflation was that they were not being recognized as being the best of the best in order to land $150,000 starting salaries in the top consulting firms of the world.  Those consulting firms wanted the top 10% of the graduates tagged "prime-grade" for market by Stanford professors.  (Recruiters also complained that all letters of recommendation, even those for weaker students, were too glowing to be of much use.  This is partly due to fear of lawsuits, but it's also a cop out.)

*******************
And now, a new report prepared by the American Academy of Arts & Sciences says it's time to put an end to grade inflation.

"Deflating the easy 'A'," by Teresa Méndez, Christian Science Monitor, May 4, 2004 --- http://www.csmonitor.com/2004/0504/p12s02-legn.html  
*******************


May 6, 2004 message from Paul Fisher [PFisher@ROGUECC.EDU

The BBC did a small piece on the four-minute mile this morning. It is interesting that 50 years ago that barrier was thought to be impossible to break, yet now runners are not considered "world-class" unless they can do so regularly. Does that mean our tracks are shorter? Stopwatches slower?

We should be improving our instructing ability and our students grades should be reflecting that. I know that my courses are taught much better today than twenty years ago, and I would be surprised if any instructor would say that their teaching skills have degraded over the years.

That does not mean I don't see the internal problems with SAT and other measurements that may inhibit student learning, yet maintain instructor status.

Paul

May 6, 2004 reply from Bob Jensen

Hi Paul,

You said: 

***************************** 
"We should be improving our instructing ability and our students' grades should be reflecting that. I know that my courses are taught much better today than twenty years ago, and I would be surprised if any instructor would say that their teaching skills have degraded over the years." 
****************************

Near the bottom of this message you will read a less optimistic quote from Ohio State University: 

*************************** 
The massive number of undergraduates who are effectively illiterate (and innumerate) leads to a general dumbing down of the curriculum, certainly the humanities curriculum. 
***************************

It is absolutely clear that we are not "improving our instructing ability" in K-12 education, where our TV-generation students are in a race for the bottom and are demonstrating an immense lack of motivation in public schools. They are winning a speed test in terms of hours spent in class (maybe 4-5 hours per day) vis-à-vis my school days when we spent nearly eight hours per day (8:00 a.m. to 12:00 noon and 1:00 to 4:30 p.m.) in class, minus two recess breaks.

NB:  
Especially note the last paragraph at the bottom of this message which compares U.S. versus Japanese school children.  The last line reads "A little Japanese respect for hard work might work wonders for this generation of American slackers who refuse to recognize their own ignorance with anything other than praise."      

The claim is also doubtful for our college graduates when employers tell us how badly communication skills have declined, especially the grammar and creative writing skills of the TV-generation. I think the media has greatly expanded students' superficial knowledge about a lot of things, but so much of it seems so shallow. Ask your college's older writing composition instructors whether writing skills have improved over the years. Ask the instructors in the basic math/stat course whether math skills have improved.

I think that more of our graduates might be able to run the four-minute mile, and their term papers may be equally fast-paced Google pastes that set speed records but not quality records.

How well do you think our college graduates would do on this supposed 1895 test for eighth graders --- http://skyways.lib.ks.us/kansas/genweb/ottawa/exam.html 

If you get a chance, compare the reading book currently used in the fifth grade of your school district with the turn-of-the-century McGuffey Reader ---- http://omega.cohums.ohio-state.edu/mailing_lists/CLA-L/1999/12/0092.php 

The recent anecdotes about the inability of undergraduates to read what grade school students used to read before WW II should hardly come as a surprise. The new 1998 NAEP writing assessments, now available at the National Center for Education Statistics, show in correlation with the reading assessments that the majority of US students lack the skills for reading any advanced literature.

In his press release, Gary W. Phillips, the Acting Commissioner for the NCES, stated that the average or typical US student is not a proficient writer (where "proficient" is a descriptive skill category of the NAEP) and has only partial mastery of the knowledge and skills required for solid academic performance in writing. This is true, he noted, at the national level for all three grades (4th, 8th and 12th). Only 25% had reached the proficient achievement level, while a mere 1% in each grade had reached the advanced achievement level. I note that the skills required for basic, proficient and advanced are very generous. By the English standards of a century ago, "advanced" would probably not even qualify for "basic."

Here is a summary of the percentage of students at or above each achievement level by gender:

Gender     Advanced     Proficient     Basic
Male           0            14           70
Female         1            29           86

The discrepancy between male and female proficiency should ring alarm bells throughout the educational world. The gap here nearly guarantees poor male performance at the university. As a gross description, the data show that 23-38 percent of US students fall below grade level in writing. If one compares the writing assessments with the reading assessments, a fairly close correspondence between the two is evident. Here is a summary of the percentage of students at or above each achievement level in reading by year of assessment:

Year     Advanced     Proficient     Basic
1998         6            40           77
1994         4            36           75
1992         4            40           80

What this tells us is what everyone who teaches writing knows quite well: writing is a form of book talk. Failure in reading assures failure in writing.

It is, as a consequence, hopeless to tackle the writing problem without first solving the reading problem. Indeed, I'm quite confident that a massive improvement in reading skills would, by itself, produce a significant improvement in writing skills. The NAEP assessments suggest modest improvement in reading at the fourth grade level (though skewed by the failure of some states to include the results from students with learning disabilities), but they are far too small for the enormous amount of money that has been spent to improve the skill. Since private schools consistently outperform public schools by a large margin at all grade levels in both reading and writing assessments, there are clear advantages in relative freedom from the educational bureaucracy and greater control over discipline and content. It is very unlikely, in my opinion, that the public schools will ever work very well unless the socio-economic disparity between the poor and the middle class (shrinking though it is) can be eliminated or at least reduced. The NAEP results show another important correspondence, that between parental education and writing skill. Parents with a college degree impart more social capital--including discipline and higher expectations--to their children than parents with only a high school degree or no degree.

The massive number of undergraduates who are effectively illiterate (and innumerate) leads to a general dumbing down of the curriculum, certainly the humanities curriculum. Heroic efforts must be made simply to convey the semantic meaning of a passage children once read in McGuffy's Reader. A healthy respect for their own deficiencies coupled with the will to learn and a relentless courage to fight through to understanding would help these weak students enormously. Unfortunately, a very large proportion are simply disengaged from any kind of serious, disciplined and steady application to studies as a study by UCLA's Higher Education Research Institute shows (_The American Freshman: National Norms for Fall 1995_, ed. Sax et al. (Los Angeles: HERS, 1995)). More and more students entering college have spent less time at homework than ever before, talked less to teachers outside class, participated less actively in clubs and visited a teacher's home less frequently. They want everything presented to them in an easily graspable, attractive package--like a TV sitcom. Many claim to be bored in class and are hostile to long or complex reading assignments (whole classes indeed will revolt on occasion), but expect good grades for mediocre work. The alienated and disengaged are often proud of their ignorance. A student who claims to have read all of Othello I.i, which is after all a very modest assignment, without understanding a word of it has not availed himself of a good annotated edition, of dictionaries and of references works. He also lacks a decent sense of shame. More significantly, he hasn't displayed the will to keep working at the scene until some understanding breaks clear. 

In all my years of teaching Shakespeare at the undergraduate and graduate levels, as in my years teaching him in high school, I never encountered such a completely blank mind. Certainly not in Japan, where I'm currently teaching a seminar in Shakespeare with students who labor unremittingly to follow the syntax and meaning. A little Japanese respect for hard work might work wonders for this generation of American slackers who refuse to recognize their own ignorance with anything other than praise.

 


As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.
Albert Einstein.
I suspect this quote could easily be modified to apply to academic accounting research.

How could a school district be unaware of such an important law?  The law itself is probably a poor law that will ultimately turn the Algebra course into a color-the-equation course for students not bound for college.  The fact of the matter is that the algebra coloring books could just not be printed by the time the law went into effect.

May 2, 2004 message from Dr. Mark H. Shapiro [mshapiro@irascibleprofessor.com

The Los Angeles Times recently reported that some 200 school districts in California had been granted waivers from the new graduation requirement that compels every high school student in the "golden state" to pass Algebra 1 before receiving his or her diploma. The school districts that were granted waivers complained that they were unaware of the new law, and that it would be unfair to penalize their students who were about to graduate because of the failings of these districts. 

For students not interested in going on to college, wouldn’t it be better to substitute the Algebra course for a course combining Excel financial functions with the basic mathematics of finance so that students would understand how interest rates are calculated on loans and the basics of how they might be cheated by lenders, investment advisors (read that mutual fund advisors), and employers? For those students, the best thing they could learn in my opinion is at http://www.trinity.edu/rjensen/FraudDealers.htm
The course could also include some basic income tax fundamentals like interest and property tax deductions and the calculations of after-tax costs of home ownership and the senseless cost of purchasing vehicles you cannot afford.

Students who change their minds after graduation and decide to go on to college will just have to pick up the Algebra later on, when they have perhaps matured enough to see some relevance of algebra and other mathematics courses in their education.  I was an Iowa farm boy who did not take calculus, linear algebra, differential equations, finite mathematics, and mathematical programming until I was in a doctoral program.  This turned out to be a brilliant move, because I looked like a genius to some of my competitors in the program who had studied that mathematics years earlier and had long since forgotten much of it.  For example, one of our statistics qualifying examination questions in the doctoral program required integrating the normal distribution (not an easy thing to do) by shifting to polar coordinates.  I looked brilliant because I’d only recently learned how to integrate with polar coordinates.  My engineering counterparts had long forgotten about polar coordinates --- http://mathworld.wolfram.com/PolarCoordinates.html
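For the curious, the standard trick runs roughly as follows (my reconstruction, not the actual exam question): square the integral and switch to polar coordinates, where the extra factor of r makes the integrand easy to antidifferentiate.

$$
I = \int_{-\infty}^{\infty} e^{-x^{2}/2}\,dx, \qquad
I^{2} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^{2}+y^{2})/2}\,dx\,dy
      = \int_{0}^{2\pi}\int_{0}^{\infty} e^{-r^{2}/2}\,r\,dr\,d\theta = 2\pi,
$$

so that I equals the square root of 2*pi, which is exactly the constant that normalizes the standard normal density.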

But please, please do not ask me anything today about polar coordinates.  Many things learned in doctoral programs are not relevant to life later on.

Bob Jensen

May 3, 2004 reply from Patricia Doherty [pdoherty@BU.EDU

"…wouldn't it be better to substitute the Algebra course for a course combining Excel financial functions with the basic mathematics of finance so that students would understand how interest rates are calculated on loans and the basics of how they might be cheated by lenders, investment advisors (read that mutual fund advisors), and employers? …"

In order to understand these, a student needs many of the concepts taught in Algebra I, such as the way equations work. Algebra I is really a pretty basic math course where they spend a lot of the first months reviewing basic math like fractions and decimals. These seem to me like things students need to understand spreadsheets and compound interest. Perhaps a DIFFERENT algebra course should be offered for those who are college-bound, and those who may not be. The latter would take a course more oriented to the "practical" needs you cite, whereas the former (who also, by the way, need these things) would take a more challenging, accelerated course, more along the lines of the Algebra I you are probably thinking of.

p

I love being married. It's so great to find that one special person you want to annoy for the rest of your life. Author unknown.

Patricia A. Doherty 
Instructor in Accounting Coordinator, 
Managerial Accounting 
Boston University School of Management 
595 Commonwealth Avenue Boston, MA 02215

May 3, 2004 reply from Bob Jensen

Hi Pat,

Actually, I found that by using Excel's financial functions my students grasp the concepts and the models before they learn about the underlying equations. They are deriving amortization schedules and checking out automobile financing advertisements long before they must finally study the underlying mathematical derivations.
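Here is a minimal sketch, in Python rather than Excel and with hypothetical loan figures, of the kind of amortization schedule the students derive; the payment function below mirrors what Excel's PMT function computes, and the loop builds the schedule itself.

# Minimal amortization-schedule sketch (hypothetical numbers, Python instead of Excel).

def payment(principal, annual_rate, months):
    """Level monthly payment on a fully amortizing loan (what Excel's PMT returns)."""
    r = annual_rate / 12.0
    return principal * r / (1.0 - (1.0 + r) ** (-months))

def amortization_schedule(principal, annual_rate, months):
    """Return rows of (month, payment, interest, principal_paid, remaining_balance)."""
    pmt = payment(principal, annual_rate, months)
    balance = principal
    rows = []
    for month in range(1, months + 1):
        interest = balance * annual_rate / 12.0   # interest accrued this month
        principal_paid = pmt - interest           # the rest of the payment reduces the loan
        balance -= principal_paid
        rows.append((month, pmt, interest, principal_paid, max(balance, 0.0)))
    return rows

# Hypothetical car loan: $20,000 at 6% APR for 48 months (payment works out to about $469.70).
for month, pmt, interest, prin, bal in amortization_schedule(20000.0, 0.06, 48):
    if month <= 3 or month == 48:
        print(f"{month:3d}  payment={pmt:8.2f}  interest={interest:7.2f}  "
              f"principal={prin:8.2f}  balance={bal:10.2f}")

Students can build the same table in a spreadsheet with PMT, IPMT, and PPMT and then check it against an advertised car payment long before they ever see the closed-form derivation.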

When we eventually derive the equations, the mathematics makes more sense to the students. Sometimes they claim that they understood it better before learning about the math. It's a little like learning to appreciate poetry before delving into such things as meter and iambic pentameter --- http://www.sp.uconn.edu/~mwh95001/iambic.html 

I'm not sure at the first-course level in high school that it is really necessary to delve under the hood and understand the equations like we teach them in college. I certainly don't think that many high school students who never intend to go to college get much out of learning how to solve quadratic equations and other topics in Algebra 1. They have less interest because they don't see much use for these topics unless they are proceeding on to calculus and college.

Thanks,

Bob

May 4, 2004 reply from Gadal, Damian [DGADAL@CI.SANTA-BARBARA.CA.US]

-----Original Message----- 
From: Gadal, Damian 
Sent: Tuesday, May 04, 2004 9:07 AM 
Subject: Re: Mathematics versus Reality versus Curriculum

I thought about this most of last night, and what I've been advocating is not failing our youth. That to me means not dumbing down our education system.

The car analogy doesn't work for me, as cars were engineered with end-users in mind, as were phones, computers, radios, televisions, etc.

I don't think we should put the roof on the house before building the foundation.

DPG 
Waterfront Accounting

May 4 reply from Bob Jensen

I think the real distinction is whether you think failure to require Algebra I for all students is necessarily dumbing down the entire education system. Many nations (Germany and Japan in particular) have flexible educational curricula to serve the different needs of different students.

Alternative curricula may be equally challenging without being a "dumbing down."  Dumbing down arises when a course in any given curriculum is made easier and easier just so more students can pass the course.

Having alternative courses is not in and of itself a "dumbing down." For example, replacing Algebra I with "foundations of the mathematics of finance" or "foundations of music composition" would not necessarily be "dumbing down." Dumbing down any given course means taking the hard stuff out so that more students can pass. Replacing one hard course with another hard course is not dumbing down and may improve education because the alternate curriculum is more motivating to the student.

If you want to read more about how to "dumb down" math courses, go to http://www.intres.com/math/ 

*********************************************** 
The Old Adobe Union School District in Petaluma, California has adopted a new math program: MathLand. The net result of this action is to dumb-down the math curriculum and turn the math program into a math appreciation program. This site is dedicated to informing parents in Petaluma, California about the issues involved.

Children grow older and the protest continues against the use of the CPM Algebra I program being used at Kenilworth Junior High of the Petaluma Joint Unified District. This program is so deficient it doesn't cover even half of the California State Content Standards for Algebra I.
**************************************

 

May 2, 2004 reply from Michael O'Neil, CPA Adjunct Prof. Weber [Marine8105@AOL.COM]

As a teacher of Algebra A (yes, Algebra A: the first half of Algebra I) I can tell you that you do not even know how bad it is in public schools. I am also a CPA and teach an accounting and consumer finance class in high school. Yes, I fail most of my students. Most of my Algebra A students have already failed Pre-Algebra. They are very lazy, and given their low academic level, many of them are discipline problems.

Despite having standards and trying to TEACH them the material, I was not given tenure and was then told flat out by the principal (a young man with little teaching experience) that he did not have to give me a reason, and he would not give me a reason. This despite my yearly evaluation having no negative areas--satisfactory in all areas.

California will let schools use accounting as a math class but will not give me credit toward my Math credential. So in theory it might be that in a school accounting would be a 12th-grade class, and I would not be able to teach it, despite an MPAcc and CPA.

It will be interesting when schools show a high pass rate in Algebra I and no correlation to the Exit exam.

Mike ONeil

May 5, 2004 reply from XXXXX

I won't even start the story of what the Headmaster told me about the Cs in my Spanish class that I gave to three students who missed most of the semester because of their parents' taking them on repeated ski trips to Colorado, students who not only failed to turn in homework assigned the week before but also clearly (matching their tests to the key) failed two of the three exams in the class. My Cs were not even honest with regard to the cumulative work done, and pushing the packet. 

These students, according to the Headmaster, needed at least Bs in the class, for reasons I did not need to know. I discovered, after that reason was given, that these parents were funders of the new gym and were pledged to give more. Keep in mind that this private school, in (City X), was and still is known for having more students test higher on SATs than other private schools in town. This school also requires 5 years of Latin to get out, and it's a joke to see the helpless ones struggle with Latin the first time (of course never having taken a foreign language in school before) when their rich parents transfer them in from other private schools or HISD to begin to learn Latin and keep a required B in those classes to graduate. 

Their parents whine that the kids are having too much homework, etc. What a mess. And that was one of the very best schools (City X) had/has to offer. I, needless to say, did not return to teach there the next year. And to teach in HISD, although teachers are needed, requires a handgun license and proficiency in martial arts as well as a private bodyguard just to be defended against the classroom population. This week's New Yorker has such a cartoon (copy over at the library; hysterical). 

Bob, thanks for letting me vent here. Community colleges offer some hope, but there is such a time delay because of remedial work needed. Home schooling early might work in some cases. And to think these people are our country's future leaders. In closing, I certainly know that it is more difficult to learn as an adult than as a child or adolescent...

Happy Wednesday...

Very best, 
XXXXX


As I said previously, great teachers come in about as many varieties as flowers.  Click on the link below to read about some of the varieties recalled by students from their high school days.  It should be noted that "favorite teacher" is not synonymous with "learned the most."  Favorite teachers are often great at entertaining and/or motivating.  Favorite teachers often make learning fun in a variety of ways.  

However, students may actually learn the most from pretty dull teachers with high standards and demanding assignments and exams.  Dull teachers may also be the dedicated souls who are willing to spend extra time in one-on-one sessions or extra-hour tutorials that ultimately have an enormous impact on mastery of the course.  And then there are teachers who are neither especially entertaining nor generous with face-to-face time but who are winners because they have developed learning materials that far surpass those of other teachers in terms of student learning.  

The recollections below tend to lean toward entertaining and "fun" teachers, but you must keep in mind that these were written after the fact by former high school students.  In high school, dull teachers tend not to be popular either before or after the fact.  This is not always the case when former students recall their college professors.


"'A dozen roses to my favorite teacher," The Philadelphia Enquirer, November 30, 2004 --- http://www.philly.com/mld/inquirer/news/special_packages/phillycom_teases/10304831.htm?1

 

What works in education?

Perhaps Colleges Should Think About This

"School Ups Grade by Going Online," by Cyrus Farivar, Wired News, October 12, 2004 --- http://www.wired.com/news/culture/0,1284,65266,00.html?tw=newsletter_topstories_html 

Until last year, Walt Whitman Middle School 246 in Brooklyn was considered a failing school by the state of New York.

But with the help of a program called HIPSchools that uses rapid communication between parents and teachers through e-mail and voice mail, M.S. 246 has had a dramatic turnaround. The premise behind "HIP" comes from Keys Technology Group's mission of "helping involve parents."

The school has seen distinct improvement in the performance of its 1300 students, as well as regular attendance, which has risen to 98 percent (an increase of over 10 percent) in the last two years according to Georgine Brown-Thompson, academic intervention services coordinator at M.S. 246.

Continued in the article

 


Work Experience Substitutes for College Credits

Bob Jensen cannot support an initiative to grant college credit for work experience
The proposal also said Pennsylvania officials would explore the creation of a centralized body that would try to commonly assess and define what kinds of work experience should qualify for credit, to ease the transfer of credit for such work among colleges in the commonwealth . . . Peter Stokes, executive vice president at Eduventures, an education research firm, agreed that policies that make it easier for workers to translate their previous work experience into academic credit can go a long way in encouraging mid-career workers who might be daunted by the prospect of entering college for the first time. “For someone who’s been in the work force for 10 or 15 years, it can be a lot less scary if the college or university you’re enrolling in can tell you that you’re already halfway there, or a third of the way there,” Stokes said.
Doug Lederman, "Work Experience for College Credit," Inside Higher Ed, August 14, 2006 --- http://www.insidehighered.com/news/2006/08/14/pennsylvania
An Old Fudd's Comment
Everybody has life experience, much of which may be more educational than passage of college courses or studying for qualification examinations. I just don't think it's possible to fairly assess this without at least having qualifying examinations for waiving courses. It may be possible to have qualifying examinations that allow certain courses to be replaced by other courses in a curriculum plan that recognizes that a student has sufficient knowledge for advanced courses. Maybe I'm just old-fashioned, but I do not think that the total number of course credits required for a degree should be lowered by life experience or qualifying examinations. Students should earn their credits in onsite or online courses that, hopefully, entail interactive learning between students and both instructors and other students.

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm


Certification Examinations

Certification Examinations Serve Two Purposes:  One is to screen for quality and the other is to put up a barrier to entry to keep a profession from being flooded

The California test (BAR exam for lawyers), by all accounts, is tough. It lasts three days, as compared with two or 2½-day exams in most states. Only one state -- Delaware -- has a higher minimum passing score. According to the National Conference of Bar Examiners, just 44% of those taking the California bar in 2004 passed the exam, the lowest percentage in the country, versus a national average of 64% . . . Critics say the test is capricious, unreliable and a poor measure of future lawyering skills. Some also complain that California's system serves to protect the state's lawyers by excluding competition from out-of-state attorneys. There has been some loosening of the rules. California adopted rules last year permitting certain classes of lawyers to practice in the state without having to take the bar.
"Raising the Bar: Even Top Lawyers Fail California Exam," by James Bandler and Nathan Koppel, December 5, 2005; Page A1 --- http://online.wsj.com/article/SB113374619258513723.html?mod=todays_us_page_one

Jensen Comment:
Unlike the BAR exam, the CPA examination is a national examination with uniform grading standards for all 50 states, even though other licensure requirements vary from state to state.  Also, the CPA examination allows candidates to pass part of the exam while retaking other parts on future examinations.  Recently the CPA examination became a computerized examination (with both objective and essay/problem components).  This may change performance scores somewhat relative to the data presented below.

You can read the following at http://www.cpaexcel.com/candidates/performance.html

National Average Pass Rates
The National Association of State Boards of Accountancy (NASBA) publishes an Annual Report Entitled "Candidate Performance on the Uniform CPA Examination." Annual data since 1998 typically showed that, for each exam held since that year:

Student Pass Rates at Top Colleges, per NASBA, May 2004 Edition:

 


Life in Our Litigious Society
If attendance alone does not guarantee a passing grade, sue the school?

This is from Karen Alpert's FinanceMusings Blog on August 23, 2006 --- http://financemusings.blogspot.com/

Finally, I'd like to mention a piece from Online Opinion about education as a consumer good. It talks about a legal settlement between a secondary school in Melbourne and the parents of a student who did not learn to read properly.
 

Those in the know have warned that this case could result in an education system burdened by increased litigation by parents against schools, with schools having to be very careful about how they promote their standard of teaching to parents of future students. Not only does the case highlight that education is becoming an area of focus in an increasingly litigious society, but that on a broader level education - at whatever level - has become little more than a product for sale in the market for knowledge and training.

While the case at hand involved a secondary school, I can easily see it applied to tertiary institutions; especially in the case of full fee paying students. Some students already seem to think that attendance should guarantee a passing grade. While I believe that certain pedagogical standards must be met, students must participate in their own education. Those who are not willing to work toward understanding and learning should not be handed a degree. (Say what?)

Jensen Comment
I think Karen's a party poop!

Bob Jensen's threads on higher education controversies are at http://www.trinity.edu/rjensen/HigherEdControversies.htm

 


Peer Review in Which Reviewer Comments are Shared With the World

I think this policy motivates journal article referees to be more responsible and accountable!

Questions
Is this the beginning of the end for the traditional refereeing process of academic journals?
Could this be the death knell of the huge SSRN commercial business that blocks sharing of academic working papers unless readers and libraries pay?

"Nature editors start online peer review," PhysOrg, September 14, 2006 --- http://physorg.com/news77452540.html

Editors of the prestigious scientific journal Nature have reportedly embarked on an experiment of their own: adding an online peer review process.

Articles currently submitted for publication in the journal are subjected to review by several experts in a specific field, The Wall Street Journal reported. But now editors at the 136-year-old Nature have proposed a new system for authors who agree to participate: posting the paper online and inviting scientists in the field to submit comments approving or criticizing it.

Although lay readers can also view the submitted articles, the site says postings are only for scientists in the discipline, who must list their names and institutional e-mail addresses.

The journal -- published by the Nature Publishing Group, a division of Macmillan Publishers Ltd., of London -- said it will discard any comments found to be irrelevant, intemperate or otherwise inappropriate.

Nature's editors said they will take both sets of comments -- the traditional peer-review opinions and the online remarks -- into consideration when deciding whether to publish a study, The Journal reported.

 

October 5, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

NEW TAKE ON PEER REVIEW OF SCHOLARLY PAPERS

The Public Library of Science will launch its first open peer-reviewed journal, called PLoS ONE, which will focus on papers in science and medicine. Papers in PLoS ONE will not undergo rigorous peer review before publication. Any manuscript that is deemed to be a "valuable contribution to the scientific literature" can be posted online, beginning the process of community review. Authors are charged a fee for publication; however, fees may be waived in some instances. For more information see http://www.plosone.org/.

For an article on this venture, see: "Web Journals Threaten Peer-Review System" By Alicia Chang, Yahoo! News, October 1, 2006 --- http://news.yahoo.com/s/ap/20061001/ap_on_sc/peer_review_science


A New Model for Peer Review in Which Reviewer Comments are Shared With the World
Peer Reviewers Comments are Open for All to See in New Biology Journal

From the University of Illinois Issues in Scholarly Communication Blog, February 15, 2006 --- http://www.library.uiuc.edu/blog/scholcomm/

BioMed Central has launched Biology Direct, a new online open access journal with a novel system of peer review. The journal will operate completely open peer review, with named peer reviewers' reports published alongside each article. The authors' rebuttals to the reviewers' comments are also published. The journal also takes the innovative step of requiring that the author approach Biology Direct Editorial Board members directly to obtain their agreement to review the manuscript or to nominate alternative reviewers. [Largely taken from a BioMed Central press report.]

Biology Direct launches with publications in the fields of Systems Biology, Computational Biology, and Evolutionary Biology, with an Immunology section to follow soon. The journal considers original research articles, hypotheses, and reviews and will eventually cover the full spectrum of biology.

Biology Direct is led by Editors-in-Chief David J Lipman, Director of the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at NIH, USA; Eugene V Koonin, Senior Investigator at NCBI; and Laura Landweber, Associate Professor at Princeton University, Princeton, NJ, USA.

For more information about the journal or about how to submit a manuscript to the journal, visit the Biology Direct website --- http://www.biology-direct.com/

Bob Jensen's threads on peer review controversies are at
http://www.trinity.edu/rjensen/HigherEdControversies.htm#PeerReview


Differences between "popular teacher"
versus "master teacher"
versus "mastery learning"
versus "master educator."

Teaching versus Research versus Education

October 24, 2007 message from XXXXX

Bob,

I'm writing this to get your personal view of the relationship between teaching and research? I think there's lots of ways to potentially answer this question, but I'm curious as to your thoughts.

October 27, 2007 reply from Bob Jensen

Hi XXXXX,

Wow! This is a tough question!
Since I know you're an award-winning teacher, I hope you will identify yourself on the AECM and improve upon my comments below.

Your question asks me to comment on the relation between teaching and research. In most instances, research at some point in time led to virtually everything we teach. In the long run, research thus becomes the foundation of teaching. In the case of accounting education this research is based heavily on normative and case-method research. Many, probably most, accountics researchers are not outstanding teachers of undergraduate accounting unless they truly take the time for both preparation and student interactions. New education technologies may especially help these researchers teach better. For example, adding video such as the BYU variable-speed video described below may replace bad lecturing in live classes with great video learning modules.

Similarly, master teachers and master educators are sometimes reputed researchers, but this is probably the exception rather than the rule. Researchers have trouble finding the time for great class preparation and open-door access.

 

********************

Firstly, your question can be answered at the university-wide level, where experts think that students, especially undergraduate students, get shortchanged by research professors. Top research professors sometimes teach only doctoral students or advanced masters students who are already deemed experts. Research professors often prefer this arrangement so that they can focus upon their research even when "teaching" a tortured, esoteric course. Undergraduate students in these universities are often taught by graduate student instructors who have many demands on their time that impede careful preparation for teaching each class and for giving students a lot of time outside of class.

Often the highest ranked universities are among the worst universities in terms of teaching.  See http://www.trinity.edu/rjensen/HigherEdControversies.htm#DoNotExcel

When top researchers are assigned undergraduate sections, their sections are often the least popular. A management science professor years ago (a top Carnegie-Mellon graduate) on the faculty at Michigan State University had no students signing up for his elective courses. When assigned sections of required courses, he got students only when they had no choice about which section the department head forced them into. This professor, whom students avoided at almost all costs, was one of the most intelligent human beings I ever met in my entire life.

One of the huge problems is that research professors give more attention to research activities than to day-to-day class preparation. Bad preparation, in turn, shortchanges students expecting more from teachers. I've certainly experienced this as a student and as a faculty member, where I've sometimes been guilty of this as I look back in retrospect. A highly regarded mathematics researcher at Stanford years ago had a reputation of being always unprepared for class. He often could not solve his own illustrations in class, flubbed answers to student questions, and confused himself while lecturing in a very disjointed and unprepared manner. This is forgivable now and then, but not repeatedly to the point where his campus reputation for bad teaching was known by all. Yet if there were a Nobel Prize for mathematics, he might well have won it. John Nash (the "Beautiful Mind" at Princeton University who did win a Nobel Prize in economics) had a similar teaching reputation, although his problems were confounded by mental illness.

Then again, sometimes top researchers, I mean very top award-winning researchers, are also the master teachers. For example, Bill Beaver, Mary Barth, and some other top accounting research professors repeatedly won outstanding teaching awards when teaching Stanford's MBA students and doctoral students. I think in these instances their research made them better teachers because they had so much leading-edge material to share with students. Some of our peers are just good at anything they seriously undertake.

But when it gets down to it, there's no single mold for a top teacher or a top educator. And top educators are often not award-winning teachers. Extremely popular teachers are not necessarily top educators --- http://www.trinity.edu/rjensen/assess.htm#Teaching

In fact, some top educators may be unpopular teachers who get relatively low student evaluations. In a somewhat analogous manner, the best physicians may get low ratings from patients due to abrupt, impersonal, and otherwise lousy bedside manners. Patients generally want the best physicians even when bedside manners are lousy. This is not always the case with students. For example, an educator may realize that students learn better when they're not spoon-fed and instead have to work like the little red hen (plant the seed, weed the field, fend off the pests, harvest the grain, mill the grain, and bake their own meals), yet many students still prefer their fast-food instructors, especially the easy-grading fast-food instructors.

********************

Secondly, your question can be answered at an individual level, regarding what constitutes a master educator or a master teacher. There are no molds for such outstanding educators. Some are great researchers as well as exceptional teachers and/or educators. Many are not researchers, although some of the non-researchers may be scholarly writers.

Some pay a price for devoting their lives to education administration and teaching rather than research. For example, some who win all-campus teaching awards and are selected by students and alumni as the top educators on campus are stuck at low-paying associate professor levels because they did not do the requisite research for higher-level promotions and pay.

Master Educators Who Deliver Exceptional Courses or Entire Programs
But Have Little Contact With Individual Students

Before reading this section, you should be familiar with the document at http://www.trinity.edu/rjensen/assess.htm#Teaching

Master educators can also be outstanding researchers, although research is certainly not a requisite to being a master educator. Many master educators are administrators of exceptional accounting education programs. Their administrative duties typically leave little time for research, although they may write about education and learning. Some master educators are not even tenure-track faculty.

What I've noticed in recent years is how technology can make a huge difference. Nearly every college these days has some courses in selected disciplines that stand out because they utilize some type of exciting technology. Today I returned from a trip to Jackson, Mississippi, where I conducted a day-long CPE session on education technology for accounting educators in Mississippi (what great southern hospitality, by the way). So the audience would not have to listen to me the entire day, I invited Cameron Earl from Brigham Young University to make a presentation that ran for about 90 minutes. I learned some things about top educators at BYU, which by the way is one of the most respected universities in the world. If you factor out a required religion course on the Book of Mormon, the most popular courses on the BYU campus are the two basic accounting courses. By popular I mean thousands of students elect to take these courses even if they have no intention of majoring in business or economics where these two courses are required. Nearly all humanities and science students on campus try to sign up for these two accounting courses.

After students take these two courses, capacity constraints restrict the number of successful students in these courses who are then allowed to become accounting majors at BYU. I mean I'm talking about a very, very small percentage who are allowed to become accounting students. Students admitted to the accounting program generally have campus-wide grade averages above 3.7.

This raises the question of what makes the two basic accounting courses so exceptionally popular in such a large and prestigious university.

Trivia Question
At BYU most students on campus elect to take Norman Nemrow's two basic accounting courses. In the distant past, what exceptional accounting professor managed to get his basic accounting courses required at a renowned university while he was teaching these courses?

Trivia Answer
Bill Paton is one of the all-time great accounting professors in history. His home campus was the University of Michigan, and for a period of time virtually all students at his university had to take basic accounting (or at least so I was told by several of Paton's former doctoral students). Bill Paton was one of the first to be inducted into the Accounting Hall of Fame.

As an aside, I might mention that I favor requiring two basic accounting courses for every student admitted to a college or university, including colleges who do not even have business education programs.

But the "required accounting courses" would not, in my viewpoint, be a traditional basic accounting courses. About two thirds or more of these courses should be devoted to personal finance, investing, business law, tax planning. The remainder of the courses should touch on accounting basics for keeping score of business firms and budgeting for every organization in society.

At the moment, the majority of college graduates do not have a clue about the time value of money and the basics of finance and accounting that they will face the rest of their lives.

 

There are other ways of being "master educators" without being master teachers in a traditional sense. Three professors of accounting at the University of Virginia developed and taught a year-long intermediate accounting case where students virtually had to teach themselves in a manner that they found painful and frustrating. But there are metacognitive reasons why the end result made this year-long active learning task one of the most meaningful and memorable experiences in their entire education --- http://www.trinity.edu/rjensen/265wp.htm
They often painfully grumbled with such comments as "everything I'm learning in this course I'm having to learn by myself."

You can read about mastery learning and all its frustrations at http://www.trinity.edu/rjensen/assess.htm#Teaching 

 

Master Teachers Who Deliver Exceptional Courses
But Have Little Contact With Individual Students

Before reading this section, you should be familiar with the document at http://www.trinity.edu/rjensen/assess.htm#Teaching

Master teachers can also be outstanding researchers, although research is certainly not a requisite to being a master teacher. Some, not many, master teachers also win awards for leading empirical and analytical research. I've already mentioned Bill Beaver and Mary Barth at Stanford University. One common characteristic is exceptional preparation for each class coupled with life experiences to draw upon when fielding student questions. These life experiences often come from the real world of business apart from the more narrow worlds of mathematical modeling where these professors are also renowned researchers.

Frequently master teachers teach via cases and are also known as exceptional case-method researchers and writers of cases. The Harvard Business School every year has some leading professors who are widely known as master teachers and master researchers. Michael Porter may become one of Harvard's all time legends. Some of the current leading master teachers at Harvard and elsewhere who consistently stand head and shoulders above their colleagues are listed at http://rakeshkhurana.typepad.com/rakesh_khuranas_weblog/2005/12/index.html

Some of the all-time great case teachers were not noted researchers or gifted case writers. Master case teachers are generally gifted actors/actresses with carefully prepared scripts and even case choreographies in terms of how and where to stand in front of and among the class. The scripts are highly adaptable to most any conceivable question or answer given by a student at any point in the case analysis.

Most master case teachers get psyched up for each class. One of Harvard's all time great case teachers, C. Roland (Chris) Christensen, admitted after years of teaching to still throwing up in the men's room before entering the classroom.

Some of these top case-method schools, like the Harvard Business School and Darden (University of Virginia), have very large classes. Master teachers in those instances cannot become really close with each and every student they educate and inspire.

Some widely noted case researchers and writers are not especially good in the classroom. In fact, I've known several who are considered poor teachers whom students avoided whenever possible even though their cases are popular worldwide.

Open-Door Master Teachers Who Have Exceptional One-On-One Relations With Students

Not all master teachers are particularly outstanding in the classroom. Two women colleagues in my lifetime stand out as open-door master teachers who were well prepared in class and good teachers but were/are not necessarily exceptional in classroom performances. What made them master teachers is exceptional one-on-one relations with students outside the classroom. These master teachers were exceptional teachers in their offices and virtually had open-door policies each and every day. Both Alice Nichols at Florida State University and Petrea Sandlin at Trinity University got to know each student, and even some students' parents, very closely. Many open-door master teachers' former students rank them at the very top of all the teachers they ever had in college. Many students elected to major in accounting because these two women became such important parts of their lives in college.

But not all these open-door master teachers are promoted and well-paid by their universities. They often have neither the time nor aptitude for research and publishing in top academic journals. Sometimes the university bends over backwards to grant them tenure but then locks them in at low-paying associate ranks with lots of back patting and departmental or campus-wide teaching awards. Some open-door master teachers never attain the rank and prestige of full professor because they did not do enough research and writing to pass the promotion hurdles. Most open-door master teachers find their rewards in relations with their students rather than relations with their colleges.

Sometimes master teachers teach content extremely well without necessarily being noted for the extent of coverage. On occasion they may skip very lightly over some of the most difficult parts of the textbooks, such as the parts dealing with FAS 133, IAS 39, and FIN 46. Sometimes the most difficult topics frustrate students with both the course and the instructor who nevertheless makes them learn those topics, even when the textbook coverage is superficial and outside technical material has to be brought into the course. Less popular teachers are sometimes despised taskmasters.

Your question initially was to comment on the relation between teaching and research. In most instances research at some point in time led to virtually everything we teach. In the long-run research thus becomes the foundation of teaching. In the case of accounting education this research is based heavily on normative and case method research. Many, probably most, accountics researchers are not outstanding teachers of undergraduate accounting unless they truly take the time for both preparation and student interactions. New education technologies may especially help these researchers teach better. For example, adding video such as the BYU variable speed video described above may replace bad lecturing in live classes with great video learning modules.

Similarly, master teachers and master educators are sometimes reputed researchers, but this is probably the exception rather than the rule. Researchers have trouble finding the time for great class preparation and open-door access.

And lastly, accountics researchers' research in accounting has not been especially noteworthy, apart from case-method research, in providing great teaching material for our undergraduate and masters-level courses. If it were noteworthy, it would have at least been replicated --- http://www.trinity.edu/rjensen/theory01.htm#Replication
If it were noteworthy for textbooks and teaching, practitioners would at least be interested in some of it as well --- http://www.trinity.edu/rjensen/theory01.htm#AcademicsVersusProfession

 

"‘Too Good’ for Tenure?" by Alison Wunderland (pseudonym), Inside Higher Ed, October 26, 2007 --- http://www.insidehighered.com/views/2007/10/26/wunderland

But what most small colleges won’t tell you — not even in the fine print — is that teaching and students often really don’t come first. And for the professors, they can’t. Once upon a time teaching colleges taught and research institutions researched. But these days, with the market for students competitive, and teaching schools scrambling for recognition, they have shifted their priorities. Now they market what is measurable — not good teaching, but big names and publications. They look to hire new faculty from top research universities who will embellish the faculty roster and bring attention to the school by publishing. And they can do this, because even job candidates who don’t really want to be at places like Rural College (although it is ranked quite well) are grateful to get a tenure-track position.

And here is where the problem is compounded. Small schools want books instead of teaching; and many new faculty — even the mediocre scholars — want to publish instead of teach. In the new small college, both win. Everyone looks the other way while courses are neglected for the sake of publications. What few devoted teachers will admit — because to do so would be impolitic — is that it is impossible to teach a 4-4 or even a 3-3 load effectively and publish a book pre-tenure without working “too hard.” What’s more, when you suggest that a small teaching college should prioritize teaching over publishing, what your colleagues hear you say is, “I am not good enough to publish.”

Sadly, many of the students also think they win in this scenario. They get good grades with little work. Once a culture like this is established, a new faculty member who is serious about teaching rocks the boat. And if she still somehow manages to excel in all the other required areas, she might be sunk. Unfortunately for the small schools, the best solution for her might be to jump ship.

"Teaching Professors to Be More Effective Teachers," Elizabeth Redden, Inside Higher Ed, October 31, 2007 --- http://www.insidehighered.com/news/2007/10/31/ballstate

David W. Concepción, an associate professor of philosophy, came to the first workshop series in 2003 wondering why “students in courses for some number of years said, ‘I get nothing out of the reading’” (specifically the primary philosophy texts). Discovering through student focus groups that what they meant was that they couldn’t ascertain the main points, Concepción realized that he needed to explain the dialogical nature of philosophy texts to students in his 40-person introductory philosophy course.

Whereas high school texts tend to be linear and students read them with the objective of highlighting facts paragraph by paragraph that they could be tested on, “Primary philosophical texts are dialogical. Which is to say an author will present an idea, present a criticism of that idea, rebut the criticism to support the idea, maybe consider a rejoinder to the rebuttal of the criticism, and then show why the rejoinder doesn’t work and then get on to the second point,” Concepción says.

“If you are reading philosophy and you’re assuming it’s linear and you’re looking for facts, you’re going to be horribly, horribly frustrated.”

Out of the workshop, Concepción designed an initial pedagogical plan, which he ran by fellow workshop participants, fellow philosophy faculty, junior and senior philosophy majors, and freshmen philosophy students for feedback. He developed a “how-to” document for reading philosophy texts (included in a December 2004 article he published in Teaching Philosophy, “Reading Philosophy with Background Knowledge and Metacognition,” which won the American Association of Philosophy Teachers’ Mark Lenssen Prize for scholarship on the instruction of philosophy).

Based on the constructivist theory of learning, which suggests that students make sense of new information by joining it with information they already have, his guidelines suggest that students begin with a quick pre-read, in which they underline words they don't know but don't stop reading until they reach the end. They would then follow up with a more careful read in which they look up definitions, write notes on a separate piece of paper summarizing an author's argument in their own words, and make notations in the margins such that if they were to return to the reading one week later they could figure out in 15 seconds what the text says (a process Concepción calls "flagging").

Concepción also designed a series of assignments in which his introductory students are trained in the method of reading philosophy texts. They are asked to summarize and evaluate a paragraph-long argument before and after learning the guidelines (and then write a report about their different approaches to the exercise before and after getting the “how-to” document on reading philosophy), turn in a photocopy of an article with their notations, and summarize that same article in writing. They participate in a class discussion in which they present the top five most important things about reading philosophy and face short-answer questions on the midterm about reading strategies (after that, Concepción says, students are expected to apply the knowledge they’ve learned on their own, without further direct evaluation).

The extra reading instruction has proven most beneficial for the weakest students, Concepción says — suggesting that the high-performing students generally already have the advanced reading skills that lower performers do not.

“What happened in terms of grade distribution in my classes is that the bottom of the curve pushed up. So the number of Fs went down to zero one semester, the Ds went down and the Cs stayed about the same in the sense that some of the former C performers got themselves in the B range and the Fs and the Ds got themselves in the C range. There was no difference in the A range, and not much difference in the B range.”

Meanwhile, in his weekly, 90-person lecture class on World Mythology, William Magrath, a full professor of classics, also saw significant drops in the number of Fs after developing targeted group work to attack a pressing problem: About a quarter of freshmen had been failing.

“I had been keeping very close records on student performance over the semester for the previous five or six years and noticed that there was a pattern wherein a lot of the freshmen were having real difficulty with the course. But it wasn’t so much that they weren’t performing on the instruments that they were given but rather that they weren’t taking the quizzes or weren’t taking the tests or weren’t getting the assignments in,” Magrath says.

Discovering that he could predict final grades based on student performance in just the first four weeks of class with remarkable accuracy, he divided the freshmen into groups based on their projected grades: the A/Bs, B/Cs and Ds/Fs (No – he didn’t call them by those names, but instead gave the groups more innocuous titles like “The Panthers.”)

Meeting with each set of students once every three weeks for one hour before class, he gave the A/Bs a series of supplemental assignments designed to challenge them. For instance, he would give them a myth on a particular theme and ask them to find three other myths connected to that theme for a group discussion. Meanwhile, the Ds/Fs took a more structured, step-by-step approach, completing readings together and discussing basic questions like, “How do you approach a story, what do you look for when you face a story, how would you apply this theory to a story?”

Meanwhile, Magrath says, the B/C students didn’t complete supplemental reading, but were instead expected to post questions about the readings or lectures that he would answer on the electronic class bulletin board – with the idea that they would remain engaged and involved in class.

In the end, Magrath found the smallest difference for B/C students. But the overall average of students climbed from 1.9 in 1999-2002, before the group work was put in place, to 2.4 in 2003-5. Of all the Fs he gave, the percentage given to freshmen (as opposed to upperclassmen in the class, who did not participate in the group work) fell from 63 to 11 percent.

When, in 2006, Magrath stopped conducting the group work in order to see what the effect might be, performance returned to earlier levels.

“The dynamic of this class is a large lecture class with the lights dimmed at night on Thursdays once a week. The kids feel anonymous almost right away. That anonymity gets broken by virtue of being with me,” Magrath says. He adds that while he has also replicated the group work format in the spring semester, the results weren’t as dramatic — suggesting, he says, that freshman fall is the critical time to get students on track.

“If what [first-semester freshmen] are experiencing in the classroom isn’t accommodating for them, they don’t know what to do. They genuinely don’t know what to do,” he says.

As for steps forward, Ranieri, the leader of the initiative, says that the Lumina grant – which included funds for faculty stipends of $2,400 the first year and $2,000 in subsequent years (faculty who participated in the first two years continued to participate in workshops and receive funding through the end of the three-year cycle) — has been exhausted. However, he hopes to expand a report he’s writing — which tracks retention and GPA data for students who enrolled in the “Lumina” courses as freshmen throughout their college careers — for publication.

So far, Ranieri says, the various professors involved have given 13 national or international presentations and produced four peer-reviewed publications.

“One of the biggest problems you have in higher education,” he says, “is allowing faculty members to be rewarded for this kind of work.”

 

October 30, 2007 reply from Linda A Kidwell [lkidwell@UWYO.EDU]

There was an article in the Smith College Alumnae Magazine several years ago about one of my favorite professors at Smith, Randy Bartlett in economics. My second semester of senior year, I was done with all my required courses and swore I would not take another 8:00 class, but one of my friends told me to give his 8am Urban Economics class a try. He opened class that first day by reading Carl Sandburg's poem "Chicago," and I was hooked -- back into an unnecessary 8 o'clock class by choice! And he was indeed a wonderful teacher. He read that poem again after a semester of urban econ, and it took on a whole new meaning.

Although I was unaware of his research activities at the time, the article I mentioned contained this wonderful quote I have kept on my wall since then:

"I carry out the research and publish because it keeps my mind lively. I can't ask my students to take on hard work without my doing the same."

When I wonder about the significance of my contributions to the field, I read that quote.

For those who don't know the poem, here it is:

CHICAGO

HOG Butcher for the World,  
      Tool Maker, Stacker of Wheat,  
      Player with Railroads and the Nation’s Freight Handler;  
      Stormy, husky, brawling,  
      City of the Big Shoulders:  
 
They tell me you are wicked and I believe them, for I have seen your painted women under the gas lamps luring the farm boys.  
And they tell me you are crooked and I answer: Yes, it is true I have seen the gunman kill and go free to kill again.  
And they tell me you are brutal and my reply is: On the faces of women and children I have seen the marks of wanton hunger.  
And having answered so I turn once more to those who sneer at this my city, and I give them back the sneer and say to them:  
Come and show me another city with lifted head singing so proud to be alive and coarse and strong and cunning.  
Flinging magnetic curses amid the toil of piling job on job, here is a tall bold slugger set vivid against the little soft cities;  
Fierce as a dog with tongue lapping for action, cunning as a savage pitted against the wilderness,  
      Bareheaded,  
      Shoveling,  
      Wrecking,  
      Planning,  
      Building, breaking, rebuilding,  
Under the smoke, dust all over his mouth, laughing with white teeth,  
Under the terrible burden of destiny laughing as a young man laughs,  
Laughing even as an ignorant fighter laughs who has never lost a battle,  
Bragging and laughing that under his wrist is the pulse, and under his ribs the heart of the people,  
                Laughing!  
Laughing the stormy, husky, brawling laughter of Youth, half-naked, sweating, proud to be Hog Butcher, Tool Maker, Stacker of Wheat, Player with Railroads and Freight Handler to the Nation.

Carl Sandburg, 1916

Linda Kidwell University of Wyoming

October 30, 2007 reply from Patricia Doherty [pdoherty@BU.EDU]

You know, Linda, somehow your post brought to my mind something from my own undergraduate days at Duquesne University. I was a Liberal Arts student, and had to take, among other things, 4 semesters of history. I came into it dreading it - I'd hated history in high school - all memorization and outlining of chapters. The first college semester was no improvement - an auditorium lecture with hundreds of students, a professor lecturing for 50 minutes, and a TA taking attendance. Then came the second semester. I looked for, and found, a smaller class. The professor (whose name escapes me right now) was a "church historian," researching history from the viewpoint of world religions. He began the first class by reading an excerpt from Will Cuppy's "The Decline and Fall of Practically Everybody." Had us rolling in the aisles. He kept at it the whole term, interspersing history with Cuppy readings and anecdotes from actual history. I loved that class.

And Will Cuppy is on my shelf to this day. And that professor awakened in me a love of history. I read history, historical novels, watch history films (fiction and non) to this day. All because one professor thought history was a living thing, not a dead timeline, and managed to convey that to a bunch of jaded sophomores.

p


Question
What is mastery learning?

April 24, 2006 message from Lim Teoh [bsx302@COVENTRY.AC.UK]

I am a Malaysian but currently teaching in the UK. Please forgive me if I failed to express myself clearly in English.

I just joined the discussion list months ago and found a lot of useful information for both my research and teaching career development. My sincere thanks to AECM.

As I plan to start my PhD study by end of this year, I would like to ask for your help to get some references to my research topic. I am interested in mastery learning theory and programmed instruction; I'll research into the application of these theories to accounting education. I aim to explore how the accounting knowledge can be disseminated or transferred more effectively to a large group of students.

Are there any useful databases or websites that could help me to start with this PhD research? Is this research topic outdated or inappropriate for me to proceed further?

Looking forward to receiving your advice and guidance.

Kind regards,

Lim
Coventry University United Kingdom

April 24, 2006 reply from Bob Jensen

Hi Lim,

Here are some possible links that might help:

Differences between "popular teacher" versus "master teacher" versus "mastery learning" versus "master educator" --- http://www.trinity.edu/rjensen/assess.htm#Teaching 

Also see “Mastery Learning” at http://www.humboldt.edu/~tha1/mastery.html 
This provides references to the classical literature on learning theory by Benjamin Bloom.

One of the most extensive accounting education experiments with mastery learning took place under an Accounting Education Change Commission Grant at Kansas State University. I don't think the experiment was an overwhelming success and, to my knowledge, has not been implemented in other accounting programs:

http://aaahq.org/facdev/aecc.htm

http://aaahq.org/AECC/changegrant/cover.htm 

To find a comprehensive list of references, feed in “Benjamin Bloom” and “Learning” terms into the following links:

Google Scholar --- http://scholar.google.com/advanced_scholar_search?hl=en&lr= 

Windows Live Academic --- http://academic.live.com/ 

Google Advanced Search --- http://www.google.com/advanced_search?hl=en 

You might also be interested in metacognitive learning --- http://www.trinity.edu/rjensen/265wp.htm

You can also read about asynchronous learning at http://www.trinity.edu/rjensen/255wp.htm 

 


October 14, 2005 message from David Albrecht [albrecht@PROFALBRECHT.COM]

I've encountered something interesting. Two Ph.D. students in Communication contacted me about visiting one or more classes in one of the courses I teach. Their assignment is to study what a master teacher does. Apparently a list of "master teachers" is kept at BGSU, and my name is on it. Well, they visited a class today, again, and then they interviewed me about teaching.

I think this is a great idea in general. Although I probably would not have adequately appreciated it when I was a "wet behind the ears" Ph.D. student, I think it is a good way to get future professors to think about the craft of teaching. Would something like this be valuable in an accounting Ph.D. program?

BTW, I have no idea how my name got on that list. I don't recall bribing anyone.

David Albrecht

October 14, 2005 reply from Bob Jensen

Hi David,

Congratulations on being singled out on your campus as a "master teacher."

Your message prompted me to think about the difference between "popular teacher" versus "master teacher" versus "mastery learning" versus "master educator."

Master teacher and master educator are not well-defined terms. However, "mastery learning" has been well defined since the early works of Benjamin Bloom. It generally entails mastery of learning objectives based on outside (curriculum) standards that often apply to multiple instructors.  Mastery learning can be accomplished with the aid of master teachers or with no "live" teachers at all. In the ideal case, students must do a lot of intense learning on their own. See http://www.humboldt.edu/~tha1/mastery.html 

One of the most interesting mastery learning graduate accounting programs is Western Canada's Chartered Accountancy (Graduate) School of Business (CASB) --- http://www.casb.com/ 
My friend Don Carter gave me an opportunity to consult in a review of this program several years ago. Courses are heavily "taught" via distance education and mastery learning objectives. It's one of the toughest graduate accounting programs that I've ever witnessed. Students truly master course objectives by a variety of processes.

Master teaching can be a bundle of many things. One usually thinks of an outstanding lecturer who also is an inspirational speaker. However, a master teacher may also be lousy at giving lectures but have fantastic one-on-one teaching dedication and talents. Three such teachers come to my mind in my nearly four decades of being a faculty member in four different universities.

The gray zone is where the teacher is a lousy lecturer and has poor oral communication skills in any environment. Can that teacher be a master teacher simply because he/she developed exceptional learning materials and possibly learning aids such as clever software/games, brilliant course content, and/or unbending standards that lead virtually the entire class to succeed at mastery learning of tough content?

I guess my question is whether a master teacher is defined in terms of mastery (or exceptional) learning versus exceptional motivation for lifelong learning and a dedicated career choice.

Anecdotally, I have been truly inspired by good lecturers in courses where I didn't learn a whole lot but wanted afterwards to learn much more. I have also worked my butt off in some hard courses where I did most of the learning on my own because the teacher didn't teach well but made sure I learned the material. I guess both kinds of teachers are important along the way. I learned to appreciate the latter kind of teacher more after I graduated.

The really hard thing to separate in practice is popular teaching versus master teaching. I like to think of master teaching as leading to mastery learning, but this is not a rigorous definition of master teaching. If half the class flunks, then the teacher cannot be considered a master teacher in a mastery learning environment.

There is one possible definition of a "popular teacher." A popular teacher might be defined as one who gets perfect teaching evaluations from students independently of grading outcomes, including perfect teaching evaluations from virtually anybody she flunks. Petrea Sandlin at Trinity University has that skill. Implicitly this means that such a teacher has convinced students that they are entirely responsible for their own successes or failures. But if half the class flunks without blaming the teacher, can the teacher be considered a popular teacher but not a master teacher? (By the way, Petrea's passing rates are much higher and I consider her to be a master teacher as well as a popular teacher.  This was duly recognized when she won an all-university teaching award of $5,000.)

Perhaps what we really need is a more precise distinction between "master teacher" versus "master educator."  A master teacher brings students into the profession, and a master educator makes sure they ultimately qualify to enter into and remain in the profession.

In any case, congratulations David! I hope you are a master teacher and a master educator.

Bob Jensen

October 15, 2005 reply from Mooney, Kate [kkmooney@STCLOUDSTATE.EDU]

I'm detecting a subtle thread here--a master teacher can get everyone to pass. Can't agree with that, especially at a public, state school that isn't the flagship institution in the state. Sometimes all the teaching and studying in the world won't be successful because the brainpower isn't there. In that situation, I believe the master teacher constructs the course and teaches in such a way that the students who can be successful in the major/profession get through the filter. Those folks teaching the filter course need to be master teachers AND courageous. (Note: I don't teach the filter course but wholeheartedly support the guy who does.)

Our pre-business advising group often wishes for a sorting hat like that in the Harry Potter books to eliminate the pain of failing in the first intermediate accounting course.

Just another lurker muddying an otherwise crisp discussion,
K

October 15, 2005 reply from Bob Jensen

Hi Kate,

You make a very good point. Perhaps we can work toward a definition of master teacher as one who draws out every bit of brain power that is there even though there may not be enough brain power and whatever else it takes for mastery learning or even what it takes to pass a course by the teacher's own standards.

I might note that most college courses are not mastery learning courses. If the instructor both teaches the course and sets the standards, the standards may vary from instructor to instructor even when they teach virtually the same course. Some instructors set lower standards in an effort to instill confidence and keep troubled students from giving up entirely. Other instructors set high standards because of their allegiance to external criteria. For example, some might view it as unethical to hold out promise that all students can become engineers, CPAs, medical doctors, or computer scientists. Maximal effort on the part of some students just will not cut it later on.

Mastery learning by definition implies some type of external standards imposed upon all instructors teaching virtually the same course. Professional certification examinations (e.g., medical examinations, bar exams, and CPA examinations) often dictate many of the mastery learning standards in professional studies.

Many college professors despise mastery learning because they feel it converges on training (albeit tough training) as opposed to education (where learning how to learn is deemed paramount).

I'm still troubled by the definition of a master teacher. I don't think there is a single definition, although any definition must weigh heavily upon instilling a motivation to learn. You are correct, Kate, in pointing out that motivation alone is not enough for some students. There probably is no threshold level (such as 60%) of passage rate in the definition of a master teacher.

I'm less troubled by a definition of a master educator. I don't think there is a single definition, but I do think that the criterion of motivation weighs less heavily than dedication to external (mastery) standards and exceptional skill in preparing students to meet mastery standards. Here there is also no threshold passage rate, but the expectation might be lower than for a master teacher because the standards might be set higher by the master educator. One would only hope so in the final years of studies to become a brain surgeon.

Bob Jensen

October 16, 2005 reply from Stokes, Len [stokes@SIENA.EDU]

I feel it takes as much effort from a student to get an "F" as an "A," just in the opposite direction. Having said that, I think it is the teacher who can motivate "C" brain power to do "B" or better work, or accomplish similar things with other students, who deserves to be recognized as the master teacher.

My $.01 worth.
len

October 15, 2005 reply from Roberta Brown Tuskegee University [RBrown1205@AOL.COM]

This thread reminded me of one of my first successful grant funding searches when I was working in the Engineering Division at Tuskegee University. I found a National Science Foundation funded grant that essentially taught engineering faculty certain education principles and techniques. Many college faculty get their teaching position after coming directly from the private sector, where they worked as mechanical, electrical, etc., engineers, and they did not take education courses in college. A professor at West Point developed the course, and offered it through NSF, and an acting engineering dean at Tuskegee was awarded funding for the program to come to the University for a number of years.

I am not sure if the program is still ongoing at Tuskegee (it started in the late 1990's), but I see the program offering at

http://www.dean.usma.edu/cme/cerc/1996-1997/T4E 1997.htm 

I wonder if accounting professors can also become college faculty directly from the private sector, without education credits?

 


Degrees Versus Piecemeal Distance (Online) Education

"Offering Entire Degrees Online is One Key to Distance Education, Survey Finds,"  by Dan Carnevale, The Chronicle of Higher Education, November 26, 2005, Page A1

The distance-education programs that offer entire degrees online are more successful than those that offer only a scattering of courses, a new survey has found.

The report, titled "Achieving Success in Internet-Supported Learning in Higher Education," was written by Rob Abel, president of a nonprofit organization called the Alliance for Higher Education Competitiveness.  The report was set to be released this week.

Mr. Abel says the organization wanted to find out what made a distance-education program successful and to share the information with other institutions.  The organization surveyed officials at 21 colleges and universities that it determined to be successful in distance education.  In their responses, college officials highlighted the need for such common elements as high-quality courses and reliable technology.

But what struck Mr. Abel as most important was that 89 percent of the institutions created online degree programs instead of just individual online courses.  Online degree programs lead to success, he says, because they tend to highlight a college's overall mission and translate into more institutional support for the faculty members and students working online.

"It's easier to measure the progress at a programmatic level," Mr. Abel says.  "The programmatic approach also gets institutions thinking about student-support services."

Of course, success is subjective, he says, and what may be deemed successful for one institution may not work at another.

But he found that some college officials believe distance education has not lived up to their expectations.  He hopes that some colleges will learn from institutions that have succeeded online.  "These particular institutions didn't see this as a bust at all," Mr. Abel says.  "Maybe that just means that they set realistic expectations."

SUCCESS STORIES

One of the institutions included in the report is the University of Florida, which enrolls more than 6,000 students in its online degree programs.  William H. Riffee, associate provost for distance, continuing, and executive education at the university, says Florida decided to move forward with a strong distance-education program because so many students were demanding it.

"We don't have enough seats for the people who want to be here," Mr. Riffee says.  "We have a lot of people who want to get a University of Florida degree but can't get to Gainesville."

The university does not put a cap on enrollments in online courses, he says.  Full-time Florida professors teach the content, and part-time faculty members around the country field some of the questions from students.

"We have learned how to scale, and we scale through an addition of faculty," Mr. Riffee says.  "You scale by adding faculty that you have confidence will be able to facilitate students.

Another college the organization deemed successful in distance education is Westwood College, a for-profit institution that has campuses all over the country, in addition to its online degree programs.  Shaun McAlmont, president of Westwood College Online, says some institutions may have trouble making the transition to online education because higher education tends to be slow to change.

"How do you introduce this concept to an industry that is very much steeped in tradition?"  he asks.  "You really have to re-learn how you'll deliver that instruction."

Mr. McAlmont, who has also spent time as an administrator at Stanford University, says non-profit institutions could learn a lot from for-profit ones when it comes to teaching over the Internet.

Continued in article

Bob Jensen's threads on distance education are at http://www.trinity.edu/rjensen/crossborder.htm 


You can read more about such matters at http://www.trinity.edu/rjensen/255wp.htm

Also see the Dark Side and other documents at http://www.trinity.edu/rjensen/000aaa/0000start.htm

For threaded audio and email messages from early pioneers in distance education, go to http://www.trinity.edu/rjensen/ideasmes.htm