Podcast Links – Spring 2017

Over the past few years, I’ve started listening to a large number of podcasts. My subscriptions are increasingly focused on podcasts related to higher education (teaching and research) and time management. I thought it might be useful to list and link to some of them here!

Standards Based Grading: Semester Review, Part 1

As mentioned in a previous post, I employed standards-based grading (SBG) in my parallel programming course this semester. Over a few posts, I will try to highlight how I felt it went and how I plan to make use of it in the future. This post covers some general student feedback on the use of SBG and how I designed the class.

Twenty of the twenty-eight students in the course responded to a set of extra course evaluation questions that I provided, and of those, about ten were positive to very positive. The most common positive comments were along the lines of:

  • The course assessment style reduced the stress of tests and allowed more focus on the material rather than on cramming for tests
  • Reassessments allowed improving my understanding of the material over time
  • I think I will have a deeper understanding of the material after the course ends

The gist of the comments that were less positive was:

  • The grading scale and processes were initially confusing
  • Small errors on assessments leading to an “R” were frustrating
  • The approach seemed to be more work for the professor than typical courses (in time spent crafting assessments and grading)

I started off the course planning to cover and assess a total of 23 “knowledge” standards and 3 “programming” standards (one programming standard for each different parallel programming toolkit that was covered in the course). I ended up only assessing the students on 13 knowledge standards and the 3 programming standards. We covered some material related to additional standards during the last two or so weeks of class, but I didn’t end up assessing the students on that material.

I used the first half of most Tuesdays in the course as an assessment period, where students would take one or two assessments (chosen by me) and then could take one reassessment (chosen by the student). Students could reassess on a standard as many times as they wanted, under the constraint that they could do at most one reassessment a week. Students had to submit an online form by noon on Mondays indicating which standard they wanted to be reassessed on. The midterm and final exam periods served as reassessment periods during which students could opt to do as many reassessments as they preferred.

For the programming standards, I assigned 8 problems over the course of the semester that would each take between 1 and 2 weeks to complete. Students were required to submit 6, two for each of the three parallel programming APIs we learned about during the semester. The initial plan was that students, if they needed to be reassessed on a programming problem, could complete another of the 8 problems using the same toolkit they had used in solving the original problem they were requesting reassessment on. Due to slowness on my part in getting some of the labs back (they took a while longer to score and provide useful feedback on than I expected), I ended up allowing students to correct and resubmit a given problem to raise an R grade to an M, or to submit a solution to another problem if the original score was an R or M and they wanted the opportunity to earn up to an E.

All grading on assessments and programming problems was done on an EMRN scale. Homeworks (called guided practice) were graded on an effort basis and consisted of 4-5 questions related to a pre-lecture reading. One point was awarded for each question on which the student made a good-faith effort.

The overall semester grade was based primarily on how many standards a student achieved an M or E on. The homework grade influenced whether a + or – was awarded, based on the percentage of homework points earned. Students were given five tokens at the start of the semester. A token could be submitted (1 token per programming problem) for a no-questions-asked 48-hour extension on a programming problem deadline. Any tokens remaining at the end of the course could be submitted and credited towards guided practice points at a rate of 1.5 points per token.

Here is a list of the knowledge and programming standards that ended up being covered, and here is the overall semester grade scale. 05/31/2017 edit: Here is the rubric I used to grade programming problems.

I hope to write two more posts on this in the future:

  • Some statistics on the course (number of reassessments taken, etc).
  • Things I would change or like to try in a future SBG offering of the course and in using SBG in my other classes.

Assignment Analytics

Over this summer (summer 2017), I plan to perform some exploratory analytics regarding the completion of assignments for my spring 2017 classes. Assignment completion data can be extracted from many learning management systems, including Sakai and Canvas. In my case, I will be extracting the information from Google Classroom.

In both of my classes, there were required assignments and optional assignments.

  • Required assignments were assignments on which students received a grade. The grade was either based on completion (the work had to show reasonable effort) or on some traditional scoring rubric.
  • Optional assignments were used to provide students additional practice on topics. Students did not receive a grade on optional assignments, but did receive feedback from the instructor. Feedback was almost always offered after submission of the optional assignment, and occasionally while students were still working on the assignment.

Providing feedback before a submission is due is possible because Google Classroom is built on Google Docs, and the teacher of the course can co-edit and comment on assignment documents at any time. This also allows determining whether a student did any work on an assignment, even if it was not formally submitted. Assignment submission effectively revokes further student editing. In Google Classroom, students can also “un-submit” work before it is graded, which is a means of supporting resubmissions. This is of interest in supporting the idea that students may discover an error on their own and revise their solutions. (Sakai, the traditional LMS we use at WFU, explicitly supports multiple submissions of an assignment as well.)

Questions I plan to ask include the following (a rough sketch of how a couple of these might be computed appears after the list):

  • For required assignments,
    • Percentage of students that complete an assignment
    • Assignment completion time relative to due date
    • Number of re-submissions before due date
  • For optional assignments (practice sets),
    • Percentage of students that complete an assignment
    • Percentage of students that complete any part of an assignment
    • Percentage of students that complete an assignment after intermediate feedback
    • Assignment completion time relative to the next assessment date
    • Number of students that completed k optional assignments (i.e. how many students completed 1 optional assignment, how many completed 2, etc.)
    • Intersection set of students that completed k optional assignments (i.e. is it the same subset of students completing each optional assignment or does it differ (based on perceived understanding or the like)?)
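
As a rough first pass, a couple of the simpler metrics above could be computed along the lines of the sketch below. The CSV layout assumed here (one row per student/assignment pair with a 0/1 completion flag and no header row) is purely illustrative; it is not necessarily what the Google Classroom export will actually look like.

    // Sketch: per-assignment completion percentage and per-student counts of
    // completed assignments, from a hypothetical CSV export with lines of the
    // form: student_id,assignment_id,completed (0 or 1).
    #include <fstream>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    int main() {
        std::ifstream in("classroom_export.csv");   // hypothetical file name
        std::map<std::string, int> rows_per_assignment;
        std::map<std::string, int> completed_per_assignment;
        std::map<std::string, int> completed_per_student;

        std::string line;
        while (std::getline(in, line)) {
            std::stringstream ss(line);
            std::string student, assignment, flag;
            std::getline(ss, student, ',');
            std::getline(ss, assignment, ',');
            std::getline(ss, flag, ',');
            rows_per_assignment[assignment]++;
            if (flag == "1") {
                completed_per_assignment[assignment]++;
                completed_per_student[student]++;
            }
        }

        for (const auto& kv : rows_per_assignment) {
            double pct = 100.0 * completed_per_assignment[kv.first] / kv.second;
            std::cout << kv.first << ": " << pct << "% of students completed\n";
        }
        for (const auto& kv : completed_per_student) {
            std::cout << kv.first << " completed " << kv.second << " assignments\n";
        }
        return 0;
    }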

I hope to use the collected data to inform at least:

  • The utility of providing optional assignments (were they made use of? were certain parts of them made use of? were they used immediately after a topic was covered or as test review?)
  • The utility of providing intermediate feedback (did it encourage further work on an optional assignment?)
  • Spacing of assignment due dates within a course (were due dates too close?)
  • Appropriate assignment due times (does an 8am due time lead to early-morning-hour submissions?)

I likely can’t share much detailed data on this blog due to student privacy issues, but I hope to at least be able to summarize some high-level take-aways. My goal is to have some of this information available by the middle of July 2017.

Thoughts on crafting a recursion problem for teaching CS1/CS2

This semester (Spring 2017), I taught our CS2 course. About a week of this course covers recursion, after an introduction to the topic in CS1. One problem I posed to my students turned out to be “nifty”, and I plan to use similar problems in the future.

The students are first exposed to this version of a “find in array” function, which treats the array as a recursive data structure. It has a very “standard”-feeling recursive structure: an if-statement to check a base case, an else-if-statement to check another base case, and an else-statement to handle the recursive case. (You may need to click on the image to enlarge it, then hit the back button to return to this post.)
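
For readers who cannot view the image, the function is roughly along these lines (a simplified sketch; the names and exact details may differ from the version shown in class):

    // "Standard" recursive find: the array is treated as a first element
    // followed by a smaller array of size n - 1.
    bool find(const int data[], int n, int target) {
        if (n == 0) {
            return false;                           // base case: empty array, not found
        } else if (data[0] == target) {
            return true;                            // base case: target is the first element
        } else {
            return find(data + 1, n - 1, target);   // recursive case: search the rest
        }
    }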

I then present the following recursive function (again, you may need to click on the image to enlarge it, then hit the back button to return to this post):
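
(It is roughly of the following shape; again, this is a simplified sketch rather than the exact code from the image.)

    // The same search written as a single boolean expression.
    // The && and || operators are evaluated lazily (short-circuit).
    bool find(const int data[], int n, int target) {
        return n > 0 && (data[0] == target || find(data + 1, n - 1, target));
    }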

I then ask the students to address the following questions.

  1. What is the evidence that this is a recursive function?
  2. In “simple sentences” (not code), describe the two scenarios in which this function would return false.
  3. How is an array of some size N treated recursively – as a single element followed by a size N-1 array, or as two size N/2 arrays?
  4. Under what conditions will this function not make use of the recursive function call? You may want to refer back to the idea of lazy (AKA short-circuit) evaluation studied earlier in the course.

Students generally did very well on questions 1 and 3, but there were a number of errors on questions 2 and 4. To help point students in the right direction, I suggested they think of the function body as a large X && (Y || Z) expression, and then consider (a) under what conditions that expression returns true or false, (b) under what conditions it is or is not fully evaluated, and (c) how a lazy-evaluation analysis of the expression relates back to the original recursive implementation.
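
A tiny self-contained demo (my own illustration here, not part of the course materials) makes the short-circuit behavior concrete; the print statements show which sub-expressions are actually evaluated:

    #include <iostream>

    // Prints the name of a sub-expression as it is evaluated, then returns its value.
    bool report(const char* name, bool value) {
        std::cout << "evaluating " << name << "\n";
        return value;
    }

    int main() {
        // X is false: the && short-circuits, so neither Y nor Z is evaluated.
        bool r1 = report("X", false) && (report("Y", true) || report("Z", true));
        std::cout << "result: " << r1 << "\n\n";

        // X is true and Y is true: the || short-circuits, so Z is skipped.
        bool r2 = report("X", true) && (report("Y", true) || report("Z", false));
        std::cout << "result: " << r2 << "\n";
        return 0;
    }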

I plan on using similar problems in the future, as this style of problem exposes students to the notion that not all recursion follows the if-[else if]-else pattern, and it requires thinking through how the AND operation, OR operation, and lazy evaluation work.

PythonTutor visualizations of each implementation are available via these links: standard version, lazy evaluation version

Standards Based Grading

This semester (Spring 2017), I am implementing specification/standards-based grading, as espoused in this Linda Nilson book as well as by an increasing number of practitioners, in my course on parallel computation.

Two faculty, both in mathematics departments, whom I have been following closely with respect to how they implement this type of grading are Robert Talbert at Grand Valley State University and Kate Owens at the College of Charleston (links are to their respective blog or webpage that provides insights into their methods). While there are faculty using this approach in other fields at the collegiate level (and many, many faculty in the K-12 arena), I am particularly interested in following along with those in mathematics, given the similarity between computer science and mathematics in many aspects of the college curriculum and pedagogical approaches.

Over the next few months, I hope to post additional information on my course design as well as feedback from my students. As a starting piece of information, here are the 23 undergraduate standards I chose for my course this semester.

Pythontutor Visualisation

In my previous blog entry, I mentioned how I was planning on using Philip Guo’s PythonTutor for C++ to support having students in my online CS2 class work collaboratively on some of the class labs. While I really appreciate that capability, I am also significantly impressed by the visualisation capabilities of the site’s tools. After the jump is a video I made explaining insertion sort to the students in my class. Jump to 7 minutes, 20 seconds and check out the ability to watch items in the list being slid right and then the item of interest being dropped in the right place.
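
For reference, the algorithm being animated is a standard insertion sort; a minimal version (my own sketch here, not the exact code used in the video) looks roughly like this:

    #include <vector>

    // Insertion sort: each pass slides larger items one slot to the right,
    // then drops the current item into the slot that opens up.
    void insertionSort(std::vector<int>& a) {
        for (std::size_t i = 1; i < a.size(); ++i) {
            int item = a[i];            // the "item of interest" for this pass
            std::size_t j = i;
            while (j > 0 && a[j - 1] > item) {
                a[j] = a[j - 1];        // slide the larger element right
                --j;
            }
            a[j] = item;                // drop the item into place
        }
    }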

Online collaborative coding in CS2

This summer I will be teaching an online version of our CS2 course, the second course required for computer science majors, for the first time. As one part of the course, I would like the students to work collaboratively on programming. While this would normally be quite straightforward in a face-to-face setting, when the class is online it becomes a lot more difficult.

I envisioned that there might be something like Google Docs for coding – in terms of cost, ease of use, revision history, etc. – but alas, Google doesn’t appear to offer any such thing.

For my class, the system needs to support C++, since that is the language taught in our CS2 course. I found various IDEs (particularly cloud IDEs) that seem to be heading in the right direction (I’ll post a set of links here soon), but nothing felt quite right. Right now, I’m leaning towards having the students make use of Philip Guo’s pythontutor.com, with its shared session capabilities. The students will work in pairs.

I’ll write a review of how things worked out at the end of the summer. The class is fairly small (in terms of student numbers) and I’ll have the students work just in pairs, so I don’t think it will stress the server system (I’m always worried about unleashing a class onto a web system!). One thing I’m already aware of as an issue is that the C++ part of the pythontutor site does not support standard input, so I’ll need to be a little creative in the design of the collaborative labs.
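
One likely workaround (just a sketch of how a lab might be structured, not a feature of the site) is to bake the “input” directly into the program as an initializer list that students edit, rather than reading it with std::cin:

    #include <iostream>
    #include <vector>

    int main() {
        // Stand-in for stdin: students edit this list to try different cases.
        std::vector<int> input = {7, 3, 9, 1};

        int sum = 0;
        for (int value : input) {
            sum += value;
        }
        std::cout << "sum = " << sum << std::endl;
        return 0;
    }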

Learning Space Design

On Wednesday, 3/23, I was able to participate in a seminar on learning space design at Wake Forest entitled New Spaces, New Opportunities. Wake Forest has invested in updating a number of classrooms, including one in my department (an image of that classroom is available here).

The primary focus of the seminar was to think about and actualize ways to make active use of these new classroom designs. The session was led by Dana Gierdowski, a learning space researcher and a teaching assistant professor of rhetoric and composition at NC State University. She maintains a blog about flexible classrooms and activities suited to such classrooms.

As part of the seminar, we were tasked with designing an active group activity that would make use of mobile whiteboards and an open space. Our group developed an activity in which students initially start off in groups examining artifacts (a set of artifacts at each station), becoming experts in the details of those artifacts. They would then rotate (one person from each group moving to another station) so that the new groups (with one expert from each artifact group) would work to generalize and find patterns in what they had learned at the detail level at their original stations. While this was in the context of Anthropology, I think I could map it to Computer Science as well.

Process Introspection

My original goal with this blog was to post every few days; it currently appears my pace is closer to every few weeks.

Stemming from a faculty development seminar I’m involved in, as well as personal curiosity, I am quite interested in learning more about the processes others employ with respect to time allocated to research, teaching, and personal life. It seems there are quite a number of computer science faculty who are putting effort into refining their own processes and sharing their experiments and experiences. A few blogs I’ve really started getting into are the following:

I plan to try to update this blog with things that work/don’t work for me as I take on my own process improvement experiments.