Experiences with using tokens in my college classroom

This is a short blog post in support of using tokens in the college classroom.

A loose definition of a token, as used here, is a virtual unit issued to students that can be redeemed to support success in a course. Students commonly redeem tokens in exchange for deadline extensions, excused absences, or opportunities to revise and resubmit work.

Much of the use of tokens appears to be tied to practitioners of specification- and standards-based grading, with Linda Nilson suggesting tokens as a key component of her model of specifications grading. Relevant resources from Nilson include her specifications grading book and an Inside Higher Ed article of hers discussing specifications grading and tokens.

In my opinion, the notion of tokens is a net positive along a number of axes:

  • Providing a fair means of dealing with late homework, makeup tests, and related issues. All students are likely to encounter some issue when participating in a class – car trouble, a morning of oversleeping, getting sick, and needing to return to family are a few instances I’ve encountered. Instead of a faculty member having to make a decision on a student-by-student basis, weighing which issues are worthy of a makeup, tokens allow for a fair and simple policy to be implemented. They are fair in the sense that all students receive the same number of initial tokens and can apply them as needed. The faculty member is not required to determine which excuses are “worthy” of allowing a late submission or makeup (an often subjective task). In addition, a “no questions asked” policy for submitting tokens, where students simply request to use a token instead of having to provide an excuse, allows students increased privacy.
  • Providing a means to support self-regulation in the students. Assuming there are only a handful of tokens that can be made use of in a given semester, students must take into account their own learning trajectory over the rest of the semester when deciding when to make use of tokens.

I made use of tokens in my “Parallel Computation” course last semester (Spring 2017) at Wake Forest. Students were given 5 tokens at the beginning of the semester. A 48-hour “no questions asked” extension on any programming project was provided at the cost of one token (there were 8 programming projects in the course, and students were given around 10 days to complete each one). At the end of the semester, each token a student had remaining could be redeemed for additional homework points at a rate of 1 token per 1.5 homework points. For a student who spent no tokens during the semester, redeeming all 5 was roughly equivalent to making up for 2 missed homeworks; homework points determined whether students received +/- grades on top of the A, B, C, D, or F earned from mastering course standards. Approximately half the class used a token on one or more programming projects.

Some approaches provide students only a fixed number of tokens, while others allow additional tokens to be earned during the course, through means such as completing additional practice work or completing work early (incentivizing good study behaviors and engagement with the material), or through attending extra-curricular events (a STEM career session, a library research training workshop, etc.).

I was initially inspired to use tokens by reading the work of Robert Talbert (Grand Valley State Mathematics). An example of his early use of tokens is here, and he continues to make use of tokens, with a recent approach allowing students to use tokens for additional opportunities to revise and resubmit work and to demonstrate mastery of a topic.

Kate Owens (College of Charleston Mathematics) is implementing a token economy in her introductory calculus course in Fall 2017. Her token economy follows the general ideas of others (allowing deadline extensions, allowing revisions, acting as extra credit at the end of the semester), but she is also making use of (new to me) variable costs, where more tokens are required to make up a quiz than to obtain a homework extension.

Nora Sullivan (Citrus College Biology) is also making use of something new to me — the idea of both individual tokens and lab group tokens for her Molecular and Cellular Biology course that includes a lab (the source for this was her syllabus as posted on a Google+ Specification and Standards Based Grading community).

Both the faculty member and students should be able to easily determine the number of tokens a student has left, so it is helpful to use a gradebook that supports recording such information. I shared grade and token information through a Google spreadsheet in my spring course, though any LMS that records numerical grades should be able to keep track of this information.

In my Spring 2017 offering, requests to use a token were submitted informally (via email), though I may use something more formal (such as a Google Form) in the future. In addition to providing one central place to find requested token usage (instead of having to search through my email), a form would let me request some additional information. For example, if a student is doing a re-assessment of mastery of a topic, I could ask the student for a few sentences explaining the mistakes they made on the original assessment.

Experiences with Google Classroom

This post will highlight some of my experiences with using Google Classroom and why I anticipate using it in almost all my classes in the future. I decided to write this post both to act as a summary of a presentation I made at Wake Forest to other faculty on TechXPloration day and in response to an inquiry on Twitter about using Google Classroom in higher ed.

I first made use of Google Classroom last Spring (Spring 2017) in two courses in the Department of Computer Science at Wake Forest University (WFU). One was “Fundamentals of Computer Science” (the 2nd course for the computer science major) and the other was “Parallel Computation” (a junior/senior level elective). Almost all assignments were distributed and collected through Google Classroom.

There are lots of use cases for Google Classroom:

    • Assignment distribution and collection
    • Maintaining student grades for those assignments managed through Google Classroom
    • Providing feedback/commenting on student work
    • Collaborative writing with students
    • Providing a discussion forum via the Google Classroom Stream page
    • Providing links to resources on the Google Classroom About page
    • Polling using questions (similar to PollEverywhere, clickers, etc.)
    • Sending class announcements

Here are the reasons I like Google Classroom (in no particular order).

The first set of reasons stems from working in the “cloud”:

  • At WFU, all students and faculty are already using the Google ecosystem (GMail, Calendar, Drive, etc.) and Google Classroom integrates seamlessly with that ecosystem. The YouTube and Google Docs integrations are quite useful. This also prevents students from having to context switch – jump from their email, to logging into an LMS to find and download a file, to finding that file on their computer to edit it, etc.
  • Grades provided to students can be exported to a Google Spreadsheet.
  • Any documents that I share as resources (those that are shared “view-only”) can be updated transparently (I can fix typos and everyone sees the corrected version automatically).
  • More than one teacher (such as a second instructor or a teaching assistant) can be associated with the class.
  • Because work is stored in the cloud, it is difficult to “lose” documents (a student’s dog can’t eat his or her homework, and I can’t leave graded documents at home on the coffee table).

The second set of reasons (and the most important in my opinion) is that I feel Google Classroom allowed me to focus on feedback and ongoing dialogues with students:

  • The commenting structure (adding comments to submitted Google documents) is very natural. Because I type faster than I write, and because I could copy and paste when needed, I found I was leaving much longer and more detailed comments when typing into the submitted Google docs than when writing by hand on physical paper, and I wasn’t physically limited to writing in the margins or the like. I could also easily add to my comments links to documents, images, and videos on the web that students may want to look at – a task that would be prohibitive when written by hand due to the complexity of many URLs.
  • Students are notified when comments are made, and faculty are notified when students resolve or respond to those comments. This led to some of the most powerful (to me) aspects of using the system — in several cases, my comments would spark a discussion (4-5 back and forth comments) as a student and I resolved one-on-one an uncertainty they had.
  • It is (somewhat) possible to track whether students are making progress on assignments (by opening them and checking last-modified dates), and it is possible to comment on drafts of work, not just work formally submitted. Most students indicated they were fine with me checking in on their progress and providing “intermediate feedback” on their draft work.
  • It is possible to send assignments or announcements to subsets of students, which is useful for supporting personalized learning, group work, handling separate sections of the same class, and other scenarios. Two examples of this I made use of were: sending assignments to just the graduate students in my mixed graduate student/senior/junior course, and sending make-up quizzes to a subset of students.

A third set of reasons is that I feel Google Classroom is resource-friendly:

  • Students and I do way less printing, saving paper and toner.
  • I spend less time collecting and returning documents to students in class.

Things that bothered me about Google Classroom were the following:

    • The Gradebook only supports recording grades for assignments actually released through Google Classroom, so it isn’t possible to add something like a “Participation Grade” into the Google Classroom gradebook.
    • If the Google ecosystem goes down, Google Classroom goes down (this is very rare though!)
    • I’m not sure whether students will have access to their submitted and graded materials after they graduate (since everything is in the Google Ecosystem provided by the University) unless they download and print materials out.
    • Students can unsubmit documents at arbitrary times (after you’ve started commenting). This isn’t too much of a problem since you can see that they unsubmitted it, but it leaves the door open for some problematic cases. For example, you grade person A, who gets a high grade; before you get around to grading person B, person B unsubmits and makes changes to look more like person A’s work. This can be addressed by checking whether the work was ever unsubmitted (or unsubmitted after the original deadline); if it was, one can examine the document’s revision history.
    • Google Classroom works best with Google documents, so if you have documents in other formats (Microsoft Word, for example), it is best to convert them to the corresponding Google document types first.

Podcast Links – Spring 2017

Over the past few years, I’ve started listening to a large number of podcasts. My subscriptions are increasingly focused on podcasts related to higher education (teaching and research) and time management. Thought it might be useful to list and link to some of them here!

Standards Based Grading: Semester Review, Part 1

As mentioned in a previous post, I employed standards-based grading (SBG) in my parallel programming course this semester. In a few posts, I will highlight how I felt it went and how I plan to make use of it in the future. This post discusses some general student feedback on the use of SBG and how I designed the class.

Twenty of the twenty-eight students in the course responded to a set of extra course evaluation questions that I provided, and of those, about 10 were positive to very positive. The most common positive comments were along the lines of:

  • The course assessment style reduced the stress of tests and allowed more focus on the material than cramming for tests
  • Reassessments allowed improving my understanding of the material over time
  • I think I will have a deeper understanding of the material after the course ends

The gist of the less positive comments was:

  • The grading scale and processes were initially confusing
  • Small errors on assessments leading to an “R” were frustrating
  • The approach seemed to be more work for the professor than typical courses (in time spent crafting assessments and grading)

I started off the course planning to cover and assess a total of 23 “knowledge” standards and 3 “programming” standards (one programming standard for each different parallel programming toolkit that was covered in the course). I ended up only assessing the students on 13 knowledge standards and the 3 programming standards. We covered some material related to additional standards during the last two or so weeks of class, but I didn’t end up assessing the students on that material.

I used the first half of most Tuesdays in the course as an assessment period, where students would take one or two assessments (chosen by me) and could then take one reassessment (chosen by the student). Students could reassess on a standard as many times as they wanted, under the constraint that they could do at most one reassessment per week. Students had to submit an online form by noon on Mondays indicating which standard they wanted to be reassessed on. The midterm and final exam periods were reassessment periods where students could opt to do as many reassessments as they preferred on those days.

For the programming standards, I assigned 8 problems over the course of the semester, each taking between 1 and 2 weeks to complete. Students were required to submit 6, two for each of the three parallel programming APIs we learned about during the semester. The initial plan was that a student needing to be reassessed on a programming problem could complete another of the 8 problems using the same toolkit as the original problem they were requesting reassessment on. Due to slowness on my part in getting some of the labs back (they took longer to score and provide useful feedback on than I expected), I ended up allowing students to correct and resubmit a given problem to raise an R grade to an M, or to submit a solution to another problem if the original score was an R or M and they wanted the opportunity to earn up to an E.

All grading on assessments and programming problems was done on an EMRN scale. Homeworks (called guided practice) were graded on an effort basis and consisted of 4-5 questions that related to a pre-lecture reading. One point per question was awarded if the student made a good faith effort on a question.

The overall semester grade was based primarily on how many standards a student achieved an M or E on. The homework grade influenced whether a + or – was awarded, based on the percentage of homework points earned. Students were given five tokens at the start of the semester. These could be submitted (1 token per programming problem) for a 48-hour, no-questions-asked deadline extension on a programming problem. Any tokens remaining at the end of the course could be credited towards guided practice points at a rate of 1.5 points per token.
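
To make the bookkeeping concrete, here is a minimal sketch of how these pieces could combine into a final grade. The function name and every numeric cutoff below are hypothetical placeholders (the actual scale is in the grade scale document linked below); only the overall structure – a base letter from standards mastered, a +/- from guided practice, and 1.5 points per leftover token – comes from this post.

    #include <string>

    // Sketch of the semester grade computation. The function name and all
    // numeric cutoffs here are HYPOTHETICAL placeholders (see the linked
    // grade scale for the real ones); the structure follows the post.
    std::string semesterGrade(int standardsWithMorE, double gpPointsEarned,
                              double gpPointsPossible, int tokensLeft) {
        // Base letter grade from the count of standards earning an M or E.
        char base;
        if      (standardsWithMorE >= 14) base = 'A';  // hypothetical cutoff
        else if (standardsWithMorE >= 11) base = 'B';  // hypothetical cutoff
        else if (standardsWithMorE >= 8)  base = 'C';  // hypothetical cutoff
        else if (standardsWithMorE >= 5)  base = 'D';  // hypothetical cutoff
        else                              base = 'F';

        // Leftover tokens credit 1.5 guided practice points each.
        double gpFraction = (gpPointsEarned + 1.5 * tokensLeft) / gpPointsPossible;

        // Guided practice percentage determines the + or - (hypothetical cutoffs).
        std::string grade(1, base);
        if (base != 'F') {
            if (gpFraction >= 0.90)      grade += '+';
            else if (gpFraction < 0.70)  grade += '-';
        }
        return grade;
    }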

Here is a list of the knowledge and programming standards that ended up being covered, and here is the overall semester grade scale. 05/31/2017 edit: Here is the rubric I used to grade programming problems.

I hope to write two more posts on this in the future:

  • Some statistics on the course (number of reassessments taken, etc).
  • Things I would change or like to try in a future SBG offering of the course and in using SBG in my other classes.

Assignment Analytics

Over this summer (summer 2017), I plan to perform some exploratory analytics regarding the completion of assignments for my spring 2017 classes. Assignment completion data can be extracted from many learning management systems, including Sakai and Canvas. In my case, I will be extracting the information from Google Classroom.

In both of my classes, there were required assignments and optional assignments.

  • Required assignments were those for which students received a grade. The grade was either based on completion (the work had to show reasonable effort) or on some traditional scoring rubric.
  • Optional assignments were used to provide students additional practice on topics. Students did not receive a grade on optional assignments, but did receive feedback from the instructor. Feedback was almost always offered after submission of the optional assignment, and occasionally while students were still working on the assignment.

Providing feedback before a submission is due is possible because Google Docs underlies Google Classroom, and the teacher of the course can co-edit and comment on assignment documents at any time. This also makes it possible to determine whether a student did any work on an assignment, even if it was not formally submitted. Assignment submission effectively revokes further student editing. In Google Classroom, students can also “un-submit” work before it is graded, which is a means of supporting resubmissions. This is of interest in supporting the idea that students may discover an error on their own and revise their solutions. (Sakai, the traditional LMS we use at WFU, explicitly supports multiple submissions of an assignment as well).

Questions I plan to ask include (a sketch of computing a couple of these from exported data appears after the list):

  • For required assignments,
    • Percentage of students that complete an assignment
    • Assignment completion time relative to due date
    • Number of re-submissions before due date
  • For optional assignments (practice sets),
    • Percentage of students that complete an assignment
    • Percentage of students that complete any part of an assignment
    • Percentage of students that complete an assignment after intermediate feedback
    • Assignment completion time relative to the next assessment date
    • Number of students that completed k optional assignments (i.e. how many students completed 1 optional assignment, how many completed 2, etc.)
    • Intersection of the sets of students that completed optional assignments (i.e., is it the same subset of students completing each optional assignment, or does it differ, perhaps based on perceived understanding or the like?)
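
As a concrete starting point for the required-assignment questions, here is a minimal sketch that computes the completion percentage and the mean submission time relative to the due date. It assumes the Google Classroom data has already been exported to a hypothetical CSV file (submissions.csv) with one row per student per assignment; the actual extraction step will look different.

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // Sketch: per-assignment completion percentage and mean submission time
    // relative to the due date. Assumes a hypothetical CSV export with rows
    //   student,assignment,hours_before_due,submitted
    // where hours_before_due is negative for late work and submitted is 0/1.
    int main() {
        std::ifstream in("submissions.csv");
        std::map<std::string, int> total, completed;
        std::map<std::string, double> hoursSum;

        std::string line;
        while (std::getline(in, line)) {
            std::stringstream row(line);
            std::string student, assignment, hours, submitted;
            std::getline(row, student, ',');
            std::getline(row, assignment, ',');
            std::getline(row, hours, ',');
            std::getline(row, submitted, ',');

            total[assignment]++;
            if (submitted == "1") {
                completed[assignment]++;
                hoursSum[assignment] += std::stod(hours);
            }
        }

        for (const auto& entry : total) {
            const std::string& name = entry.first;
            double pct = 100.0 * completed[name] / entry.second;
            std::cout << name << ": " << pct << "% completed";
            if (completed[name] > 0) {
                std::cout << ", mean " << hoursSum[name] / completed[name]
                          << " hours before due";
            }
            std::cout << "\n";
        }
        return 0;
    }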

I hope to use the collected data to inform at least:

  • The utility of providing optional assignments (were they made use of? were certain parts of them made use of? were they used immediately after a topic was covered, or as test review?)
  • The utility of providing intermediate feedback (did it encourage further work on an optional assignment?)
  • Spacing of assignment due dates within a course (were due dates too close together?)
  • Appropriate assignment due times (does an 8 am due time lead to early-morning-hour submissions?)

I likely can’t share much detailed data on this blog due to student privacy issues, but I hope to at least be able to summarize some high-level take-aways. My goal is to have some of this information available by the middle of July 2017.

Thoughts on crafting a recursion problem for teaching CS1/CS2

This semester (Spring 2017), I taught our CS2 course. About a week of this course covers recursion, after an introduction to the topic in CS1. One problem I posed to my students turned out to be “nifty”, and I plan to use similar problems in the future.

The students are first exposed to a version of a “find in array” function that treats the array as a recursive data structure. It has a very “standard”-feeling recursive structure: an if-statement to check a base case, an else-if-statement to check another base case, and an else-statement to handle the recursive case.
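
A sketch of roughly what that version looks like (assuming a search over an int array that returns a bool; the exact code shown in class may differ slightly):

    // Sketch of the "standard" recursive find (assumed signature and types).
    bool find(const int arr[], int size, int target) {
        if (size == 0) {
            return false;                            // base case: nothing left to search
        } else if (arr[0] == target) {
            return true;                             // base case: found at the front
        } else {
            return find(arr + 1, size - 1, target);  // recursive case: search the size N-1 tail
        }
    }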

I then present a second recursive function.
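
Under the same assumptions as above, here is a sketch of that second version:

    // Sketch of the second version (same assumed signature; the exact code
    // shown in class may differ). The explicit if / else-if / else structure
    // is gone; the whole body is a single boolean expression.
    bool find(const int arr[], int size, int target) {
        return size > 0
            && (arr[0] == target
                || find(arr + 1, size - 1, target));
    }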

I ask the students to address the following questions.

  1. What is the evidence that this is a recursive function?
  2. In “simple sentences” (not code), describe the two scenarios in which this function would return false.
  3. How is an array of some size N treated recursively – as a single element followed by a size N-1 array, or as two size N/2 arrays?
  4. Under what conditions will this function not make use of the recursive function call? You may want to refer back to the idea of lazy (AKA short-circuit) evaluation studied earlier in the course.

Students generally did very well on questions 1 and 3, but there were a number of errors on questions 2 and 4. To help students in the right direction, I suggested they think of the function body as a large X && (Y || Z) expression, and then consider (a) under what conditions that expression would return true or false, (b) under what conditions it would be fully evaluated or not, and (c) how a lazy-evaluation analysis of the expression relates to the original recursive implementation.

I plan on using similar problems in the future, as this style of problem exposes students to the notion that not all recursion has the if/else-if/else shape, and it requires thinking through how the AND operation, the OR operation, and lazy evaluation work.

PythonTutor visualizations of each implementation are available via these links: standard version, lazy evaluation version

Standards Based Grading

This semester (Spring 2017), I am implementing specification/standards-based grading, as espoused in this Linda Nilson book and by an increasing number of practitioners. This is being implemented in my course on parallel computation.

Two faculty, both in mathematics departments, whom I have been following closely with respect to how they implement this type of grading are Robert Talbert at Grand Valley State University and Kate Owens at the College of Charleston (links are to their respective blogs or webpages that provide insights into their methods). While there are faculty doing this in other fields at the collegiate level (and many, many faculty in the K-12 arena), I am particularly interested in following along with those in mathematics, given the similarity between computer science and mathematics in many aspects of the college curriculum and pedagogical approaches.

Over the next few months, I hope to post additional information on my course design as well as feedback from my students. As a starting piece of information, here are the 23 undergraduate standards I chose for my course this semester.

Pythontutor Visualisation

In my previous blog entry, I mentioned how I was planning on using Philip Guo’s PythonTutor for C++ to support having students in my online CS2 class work collaboratively on some of the class labs. While I really appreciate that capability, I am also significantly impressed by the visualisation capabilities of the site’s tools. After the jump is a video I made explaining insertion sort to the students in my class. Jump to 7 minutes, 20 seconds and check out the ability to watch items in the list being slid right and then the item of interest being dropped in the right place.

Online collaborative coding in CS2

This summer I will be teaching, for the first time, an online version of our CS2 course, the second course required for computer science majors. As one part of the course, I would like the students to work collaboratively on programming. While this would normally be quite straightforward in a face-to-face setting, when the class is online it becomes a lot more difficult.

I envisioned that there might be something like Google Docs for coding – in terms of cost, ease of use, revision history, etc. – but alas, Google doesn’t appear to offer any such thing.

For my class, the system needs to support C++, since that is the language taught in our CS2 course. I found various IDEs (particularly cloud IDEs) that seem to be heading in the right direction (I’ll post a set of links here soon), but nothing felt quite right. Right now, I’m leaning towards having the students make use of Philip Guo’s pythontutor.com, with its shared session capabilities. The students will work in pairs.

I’ll write a review of how things worked out at the end of the summer. The class is fairly small (in terms of student numbers) and I’ll have the students work just in pairs, so I don’t think it will stress the server system (I’m always worried about unleashing a class onto a web system!). One thing I’m already aware of as an issue is that the C++ part of the pythontutor site does not support standard in, so I’ll need to be a little creative in the design of the collaborative labs.
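
One workaround I’m considering is to embed each lab’s test data directly in the starter code that students open in a shared session, in place of reads from std::cin. A minimal, illustrative skeleton (hypothetical, not an actual course lab):

    #include <iostream>

    // Illustrative lab skeleton (hypothetical, not an actual course lab).
    // Since the C++ side of pythontutor.com does not support standard input,
    // the test data is embedded directly instead of being read with std::cin.
    int main() {
        // Data that would normally arrive via standard input.
        const int scores[] = {88, 72, 95, 61, 79};
        const int n = sizeof(scores) / sizeof(scores[0]);

        // Students fill in and step through the computation in the shared session.
        int sum = 0;
        for (int i = 0; i < n; ++i) {
            sum += scores[i];
        }
        std::cout << "average = " << static_cast<double>(sum) / n << std::endl;
        return 0;
    }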