A Few Open Textbook Resources for Computer Science

I have recently become interested in using open and free textbooks in the classes where I have the option to choose the text. Here are some resources I’ve found useful and links to books I’ve used in the past.

First, here is a link to the Computer Science and Information Systems section of the Open Textbook Library hosted at the University of Minnesota. One might also be interested in the Mathematics and Engineering sections of that site.

For CS1 (the intro course for the CS major), I’ve used the textbook by David Eck: Introduction to Programming Using Java

For parallel programming, I’ve used two open textbooks by Victor Eijkhout in conjunction with each other: Introduction to High Performance Scientific Computing (for general information) and Parallel Programming in MPI and OpenMP (for toolkit specific information).

Finally, for operating systems, which I will teach for the first time next fall (Fall 2018), I am intrigued by the textbook by Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau: Operating Systems: Three Easy Pieces.

SIGCSE themes of interest (Part 1)

This is the second in a series of blog posts I’m hoping to write with some thoughts on the SIGCSE 2018 accepted publications (first post here). Note that these are “pre-symposium” posts, where I’m trying to look over the list of accepted papers/workshops/etc. and, from the titles, subdivide them into different thematic categories. If I can attend SIGCSE this year, I hope this categorization can guide me toward the sets of talks I might like to attend. Since this is being done without access to the actual papers, I apologize if my interpretations of the categories are wrong.

For this post, the categories are (along with my interpretation of the category):

  • Learning objectives – Definition of explicit goals and objectives for students to be able to achieve by the end of a curriculum, course, unit, or course session.
  • Collaborative learning – Students working in groups, exploiting the advantages of peer-to-peer learning
  • Flipped learning – Models of instruction where traditional lecture content is moved before class sessions and active, engaged learning exploiting the presence of peers and the instructor is done during class sessions. In many cases, video is used for a pre-class (mini-)lecture, though this is not required!

Links to authors in the list below are to homepages where I could find them.

There appear to be two papers related to learning objectives:

  • A Systematic Review of the Use of Bloom’s Taxonomy in Computer Science Education – Susana Masapanta-Carrión (Pontificia Universidad Católica del Ecuador); J. Ángel Velázquez-Iturbide (Universidad Rey Juan Carlos)
  • Developing Course-Level Learning Goals for Basic Data Structures in CS2 – Leo Porter (UC San Diego); Daniel Zingaro (University of Toronto, Mississauga); Cynthia Lee (Stanford University); Cynthia Taylor (University of Illinois at Chicago); Kevin Webb (Swarthmore College); Michael Clancy (University of California, Berkeley)

I’m also going to tentatively group the following paper into the learning objectives theme:

  • Tracing vs. Writing Code: Beyond the Learning Hierarchy – Brian Harrington (University of Toronto Scarborough); Nick Cheng (University of Toronto Scarborough)

For the category of collaborative learning, there appear to be three papers and a panel:

  • Analysis of Collaborative Learning in a Computational Thinking Class – Bushra Chowdhury (Virginia Tech); Austin Cory Bart (Virginia Tech); Dennis Kafura (Virginia Tech)
  • Docendo Discimus: Students Learn by Teaching Peers Through Video – Pablo Frank-Bolton (The George Washington University); Rahul Simha (The George Washington University)
  • Two-Stage Programming Projects: Individual Work Followed by Peer Collaboration – Apeksha Awasthi (North Carolina State University); Lina Battestilli (North Carolina State University); Paul Cao (University of California San Diego)
  • (Panel) Fostering Meaningful Collaboration in an Interdisciplinary Capstone Course – Liz Hutter (Valparaiso University); Halcyon Lawrence (Georgia Institute of Technology); Melinda McDaniel (Georgia Institute of Technology); Marguerite Murrell (Georgia Institute of Technology)

I have recently become interested in two-stage tests, so the two-stage programming projects notion is intriguing.

Sam Cho, a colleague of mine in Computer Science at Wake Forest, has had his students make videos as part of their own exam review. Those videos then help other students in future semesters. It should be interesting to see the research behind this practice in the Teaching Peers paper. My colleague’s students’ videos for a data structures class are here.

Finally, for this post, for the category of flipped learning, there are two papers directly related to flipped learning:

  • Flipped Class Effects on Retention after CS1 – Celine Latulipe (UNC Charlotte); Audrey Rorrer (UNC Charlotte); Bruce Long (UNC Charlotte)
  • Including Coding Questions in Video Quizzes for a Flipped CS1 – Lisa Lacher (University of Houston – Clear Lake); Albert Jiang (Trinity University); Yu Zhang (Trinity University); Mark Lewis (Trinity University)

and two that seem highly related:

  • Improving Classroom Preparedness Using Guided Practice – Saturnino Garcia (University of San Diego)
  • Comparative Heatmap Visualizations of Student Engagement with Lecture Video and the Instructors Who Love Them – Jinyue Xia (UNC Charlotte); David Wilson (UNC Charlotte)

Guided practice, as I’m aware of it per Robert Talbert’s explanation, embodies a well-designed pre-class experience for introducing a new topic, incorporating: an overview of the topic, a set of Basic and Advanced learning objectives for the topic, a set of resources to use in encountering the material for the first time (such as videos or textbook readings), a set of practice problems providing experience with the Basic learning objectives, and a means of submitting the practice work before class so the faculty member can review it. In the actual class session, the students then work with their peers and instructors on mastering the Advanced learning objectives. An example guided practice for a calculus class is here. It should be interesting to see how Saturnino found the use of guided practice to work in his CS classes.

The second related-to-flipped-learning paper appears to be on how students make use of video lectures. This should inform flipped learning scenarios, where video is often used as the means of introducing new material, but I expect should also be broadly informative for however faculty make use of video in CS courses.

In a future post, I hope to write about what I perceive as categories for three more clusters of papers: differentiated instruction, inclusive teaching, and self-regulated learning/meta-cognition.

High-level analysis of SIGCSE 2018 papers

The SIGCSE Symposium is one of the three major conferences in computer science education each year (the other two being ITiCSE and ICER). I hope, over the next few weeks, to highlight on this blog some of my thoughts on the papers and sessions to be presented at the 2018 SIGCSE Symposium. The list of accepted papers, panels, and special sessions and the list of accepted workshops have been available on the SIGCSE website since early November.

With some free time I had today, I did some “high-level” analysis of the accepted papers (not panels, sessions, or workshops). This analysis might be of interest both as a summary of trends at this year’s SIGCSE and as a potential source of insight into the computer science education community.


Experiences with using tokens in my college classroom

This is a short blog post supporting the idea of using tokens in the college classroom.

A loose definition of a token as used here is a virtual unit issued to students that can be redeemed to support success in a course. Common uses are that students can redeem course tokens in exchange for deadline extensions, excused absences, or attempts to revise and resubmit work.

Much of the use of tokens appears to be tied to practitioners of specification- and standards-based grading, with Linda Nilson suggesting tokens as a key component of her model of specifications grading. Relevant resources from Linda Nilson include a link to her specifications-based grading book and a link to an Inside Higher Ed article of hers discussing specification grading and tokens.

The notion of tokens is a net positive along a number of axes in my opinion:

  • Providing a fair means of dealing with late homework, makeup tests, and related issues. All students are likely to encounter some issue when participating in a class – car trouble, a morning of oversleeping, getting sick, and needing to return to family are a few instances I’ve encountered. Instead of a faculty member having to make a decision on a student-by-student basis, weighing which issues are worthy of a makeup, tokens allow for a fair and simple policy to be implemented. They are fair in the sense that all students receive the same number of initial tokens and can apply them as needed. The faculty member is not required to determine which excuses are “worthy” of allowing a late submission or makeup (an often subjective task). In addition, a “no questions asked” policy for submitting tokens, where students just request to use a token instead of having to provide an excuse, allows students increased privacy.
  • Providing a means to support self-regulation in the students. Assuming there are only a handful of tokens that can be made use of in a given semester, students must take into account their own learning trajectory over the rest of the semester when deciding when to make use of tokens.

I made use of tokens in my “Parallel Computation” course last semester (Spring 2017) at Wake Forest. Students were given 5 tokens at the beginning of the semester. A 48-hour “no questions asked” extension on any programming project cost one token (there were 8 programming projects in the course, and students were given around 10 days to complete each one). At the end of the semester, each token a student had remaining could be redeemed for additional homework points at a rate of 1 token per 1.5 homework points. For a student who used no tokens, redeeming all 5 was roughly the equivalent of two missed homeworks, and homework points determined whether students received +/- grades on top of the A, B, C, D, or F earned from mastering course standards. Approximately half the class used a token on one or more programming projects.

Some approaches to using tokens only provide students a fixed number of tokens, while other approaches allow additional tokens to be earned in the course, through means such as completing additional practice work or completing work early (incentivizing good study behaviors and engagement with the material) or through means such as attending extra-curricular events (a STEM career session, a library research training workshop, etc).

I was initially inspired to use tokens by reading the work of Robert Talbert (Grand Valley State Mathematics). An example of his early use of tokens is here, and he continues to make use of tokens, with a recent approach allowing students to spend tokens on additional opportunities to revise and resubmit work and to demonstrate mastery of a topic.

Kate Owens (College of Charleston Mathematics) is implementing in Fall 2017 a token economy in her introductory calculus course. Her token economy follows the general ideas of others (allowing deadline extensions, allowing revisions, acting as extra credit at the end of the semester), but she is also making use of (new to me) variable costs where more tokens are required to make up a quiz than to obtain a homework extension.

Nora Sullivan (Citrus College Biology) is also making use of something new to me — the idea of both individual tokens and lab group tokens for her Molecular and Cellular Biology course that includes a lab (the source for this was her syllabus as posted on a Google+ Specification and Standards Based Grading community).

Both the faculty member and students should be able to easily determine the number of tokens a student has left, so it is helpful to make use of a gradebook which supports recording such information. I made use of sharing of grade and token information through a Google spreadsheet in my Spring course, though any LMS which records numerical grades should be able to keep track of this information.

In my Spring 2017 offering, requests to use a token were submitted informally (via email), though I think I may like to use something more formal (such as completing a Google Form) in the future. In addition to providing one central place for me to find requested token usage (instead of having to search through my email), I could also request in the form some additional information. For example, if a student is doing a re-assessment of mastery of a topic, I could ask the student for a few sentences explaining the mistakes they made on the original assessment.

Experiences with Google Classroom

This post will highlight some of my experiences with using Google Classroom and why I anticipate using it in almost all my classes in the future. I decided to write this post both to act as a summary of a presentation I made at Wake Forest to other faculty on TechXPloration day and in response to an inquiry on twitter about using Google Classroom in higher ed.

I first made use of Google Classroom last Spring (Spring 2017) in two courses in the Department of Computer Science at Wake Forest University (WFU). One was “Fundamentals of Computer Science” (the 2nd course for the computer science major) and the other was “Parallel Computation” (a junior/senior level elective). Almost all assignments were distributed and collected through Google Classroom.

There are lots of use cases for Google Classroom:

    • Assignment distribution and collection
    • Maintaining student grades for those assignments managed through Google Classroom
    • Providing feedback/commenting on student work
    • Collaborative writing with students
    • Providing a discussion forum via the Google Classroom Stream page
    • Providing links to resources on the Google Classroom About page
    • Polling using questions (similar to PollEverywhere, clickers, etc).
    • Sending class announcements

Here are the reasons I like Google Classroom (in no particular order).

The first set of reasons stem from working in the “cloud”:

  • At WFU, all students and faculty are already using the Google ecosystem (Gmail, Calendar, Drive, etc.), and Google Classroom integrates seamlessly with that ecosystem. The YouTube and Google Docs integrations are quite useful. This also saves students from having to context switch – jumping from their email, to logging into an LMS to find and download a file, to finding that file on their computer to edit it, etc.
  • Grades provided to students can be exported to a Google Spreadsheet.
  • Any documents that I share as resources (those that are shared “view-only”) can be updated transparently (I can fix typos and everyone sees the corrected version automatically).
  • More than one teacher (such as a second instructor or a teaching assistant) can be associated with the class.
  • Because work is stored in the cloud, it is difficult to “lose” documents (a student’s dog can’t eat his or her homework, and I can’t leave graded documents at home on the coffee table).

The second set of reasons (and the most important in my opinion) is that I feel Google Classroom allowed me to focus on feedback and ongoing dialogues with students:

  • The commenting structure (adding comments to submitted Google documents) is very natural. Because I type faster than I write and because I could use copy and paste when needed, I found I was leaving much longer and more detailed comments when typing comments into the Google Classroom submitted docs than when writing by hand on physical paper, and I wasn’t physically limited to writing in the margins or the like. I could also easily add to my comments links to documents, images, and videos on the web that students may want to look at – a task that would be prohibitive when written by hand due to the complexity of many URLs.
  • Students are notified when comments are made, and faculty are notified when students resolve or respond to those comments. This led to some of the most powerful (to me) aspects of using the system — in several cases, my comments would spark a discussion (4-5 back and forth comments) as a student and I resolved one-on-one an uncertainty they had.
  • It is (somewhat) possible to track whether students are making progress on assignments (by opening them and checking last-modified dates), and it is possible to comment on drafts of work, not just work formally submitted. Most students indicated they were fine with me checking in on their progress and providing “intermediate feedback” on their draft work.
  • It is possible to send assignments or announcements to subsets of students, which is useful for supporting personalized learning, group work, handling separate sections of the same class, and other scenarios. Two examples of this I made use of were: sending assignments to just the graduate students in my mixed graduate student/senior/junior course, and sending make-up quizzes to a subset of students.

A third set of reasons is that I feel Google Classroom is resource friendly.

  • Students and I do way less printing, saving paper and toner.
  • I spend less time collecting and returning documents to students in class.

Things that bothered me about Google Classroom were the following:

    • The Gradebook only supports recording grades for assignments actually released through Google Classroom, so it isn’t possible to add something like a “Participation Grade” into the Google Classroom gradebook.
    • If the Google ecosystem goes down, Google Classroom goes down (though this is very rare!).
    • I’m not sure whether students will have access to their submitted and graded materials after they graduate (since everything is in the Google Ecosystem provided by the University) unless they download and print materials out.
    • Students can unsubmit documents at arbitrary times (even after you’ve started commenting). This isn’t too much of a problem, since you can see that they unsubmitted it, but it leaves the door open for some problematic cases. For example, you grade person A, who gets a high grade; before you get around to grading person B, person B unsubmits and makes changes to look more like person A’s work. This can be resolved by checking whether the work was ever unsubmitted, or unsubmitted after the original deadline; if it was, one can inspect the document revisions.
    • Google Classroom works best with Google documents, so if you have data in other types (Microsoft Word for example), it is best to convert them to Google’s types of documents first.

Podcast Links – Spring 2017

Over the past few years, I’ve started listening to a large number of podcasts. My subscriptions are increasingly focused on podcasts related to higher education (teaching and research) and time management. I thought it might be useful to list and link to some of them here!

Standards Based Grading: Semester Review, Part 1

As mentioned in a previous post, I employed standards-based grading (SBG) in my parallel programming course this semester. I will try to highlight in a few posts how I felt it went and how I will try to make use of it in the future. This post discusses some general student feedback on the use of SBG and discusses how I designed the class.

Twenty of the twenty-eight students in the course responded to a set of extra course evaluation questions that I provided, and of those, about 10 were positive to very positive. The most common positive comments were along the lines of:

  • The course assessment style reduced the stress of tests and allowed more focus on the material than cramming for tests
  • Reassessments allowed improving my understanding of the material over time
  • I think I will have a deeper understanding of the material after the course ends

The gist of the comments that were less positive were:

  • The grading scale and processes were initially confusing
  • It was frustrating when small errors on an assessment led to an “R”
  • The approach seemed to be more work for the professor than typical courses (in time spent crafting assessments and grading)

I started off the course planning to cover and assess a total of 23 “knowledge” standards and 3 “programming” standards (one programming standard for each different parallel programming toolkit that was covered in the course). I ended up only assessing the students on 13 knowledge standards and the 3 programming standards. We covered some material related to additional standards during the last two or so weeks of class, but I didn’t end up assessing the students on that material.

I used the first half of most Tuesdays in the course as an assessment period, where students would take one or two assessments (chosen by me) and then could take one reassessment (chosen by the student). Students could reassess on a standard as many times as they wanted, under the constraint they could do at most one reassessment a week. Students had to submit an online form by noon on Mondays indicating which standard they wanted to be reassessed on. The midterm and final exam periods were reassessment periods where students could opt to do as many reassessments as they preferred on those days.

For the programming standards, I assigned 8 problems over the course of the semester, each taking between 1 and 2 weeks to complete. Students were required to submit 6, two for each of the three parallel programming APIs we learned about during the semester. The initial plan was that students who needed to be reassessed on a programming problem could complete another of the 8 problems using the same toolkit they had used for the original problem. Due to slowness on my part in getting some of the labs back (they took longer to score and provide useful feedback on than I expected), I ended up allowing students to correct and resubmit a given problem to raise an R grade to an M, or to submit a solution to another problem if the original score was an R or M and they wanted the opportunity to earn up to an E.

All grading on assessments and programming problems was done on an EMRN scale. Homeworks (called guided practice) were graded on an effort basis and consisted of 4-5 questions that related to a pre-lecture reading. One point per question was awarded if the student made a good faith effort on a question.

The overall semester grade was based primarily on how many standards a student achieved an M or E on. The homework grade influenced whether a + or – was awarded based on the percentage of homework grades earned. Students were given five tokens at the start of the semester. These could be submitted (1 token per programming problem) to have a 48-hour no-questions-asked deadline submission on the programming problems. Any remaining tokens at the end of the course could be submitted and credited towards guided practice points at a rate of 1.5 points per token.

Here is a list of the knowledge and programming standards that ended up being covered, and here is the overall semester grade scale. 05/31/2017 edit: Here is the rubric I used to grade programming problems.

I hope to write two more posts on this in the future:

  • Some statistics on the course (number of reassessments taken, etc).
  • Things I would change or like to try in a future SBG offering of the course and in using SBG in my other classes.

Assignment Analytics

Over this summer (summer 2017), I plan to perform some exploratory analytics regarding the completion of assignments for my spring 2017 classes. Assignment completion data can be extracted from many learning management systems, including Sakai and Canvas. In my case, I will be extracting the information from Google Classroom.

In both of my classes, there were required assignments and optional assignments.

  • Required assignments were assignments where students received a grade based on the assignment. The grade was either based on completion (the work had to show reasonable effort) or was based on some traditional scoring rubric.
  • Optional assignments were used to provide students additional practice on topics. Students did not receive a grade on optional assignments, but did receive feedback from the instructor. Feedback was almost always offered after submission of the optional assignment, and occasionally while students were still working on the assignment.

Feedback provided before a submission is due is possible because Google Classroom is built on Google Docs, and the teacher of the course can co-edit and comment on assignment documents at any time. This also allows determining whether a student did any work on an assignment, even if it was not formally submitted. Assignment submission effectively revokes further student editing. In Google Classroom, students can also “un-submit” work before it is graded, which is a means of supporting resubmissions. This is of interest in supporting the idea that students may discover an error on their own and revise their solutions. (Sakai, the traditional LMS we use at WFU, explicitly supports multiple submissions of an assignment as well.)

Questions I plan to ask include:

  • For required assignments,
    • Percentage of students that complete an assignment
    • Assignment completion time relative to due date
    • Number of re-submissions before due date
  • For optional assignments (practice sets),
    • Percentage of students that complete an assignment
    • Percentage of students that complete any part of an assignment
    • Percentage of students that complete an assignment after intermediate feedback
    • Assignment completion time relative to the next assessment date
    • Number of students that completed k optional assignments (i.e. how many students completed 1 optional assignment, how many completed 2, etc.)
    • Intersection of the sets of students that completed k optional assignments (i.e., is it the same subset of students completing each optional assignment, or does it differ, perhaps based on perceived understanding?)
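
As a sketch of how a couple of these questions could be computed once the data is extracted, here is a small stand-alone example in plain Python. The roster size, field layout, and dates are entirely made up; a real Google Classroom export would first need to be massaged into this shape.

```python
from datetime import datetime

# Hypothetical data: one (student, assignment, submitted_at) row per
# student; None means the student never submitted.
ROSTER_SIZE = 3
due_dates = {"hw1": datetime(2017, 2, 10, 8, 0)}
submissions = [
    ("alice", "hw1", datetime(2017, 2, 9, 22, 15)),
    ("bob",   "hw1", datetime(2017, 2, 10, 1, 40)),
    ("carol", "hw1", None),  # never submitted
]

def completion_rate(assignment):
    """Percentage of the roster that submitted the assignment."""
    done = {s for (s, a, t) in submissions if a == assignment and t is not None}
    return 100.0 * len(done) / ROSTER_SIZE

def hours_before_due(assignment):
    """Each submission's time relative to the due date, in hours
    (positive means the work came in early)."""
    d = due_dates[assignment]
    return [(d - t).total_seconds() / 3600.0
            for (s, a, t) in submissions if a == assignment and t is not None]

print(completion_rate("hw1"))   # 2 of 3 students submitted
print(hours_before_due("hw1"))  # alice was 9.75 hours early
```

The other questions (re-submission counts, optional-assignment intersections) would follow the same pattern of filtering and grouping the submission rows.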

I hope to use the collected data to inform at least:

  • The utility of providing optional assignments (were they made use of? were certain parts of them made use of? were they used immediately after a topic was covered, or as test review?)
  • The utility of providing intermediate feedback (did it encourage further work on an optional assignment)?
  • Spacing of assignment due dates within a course (were due dates too close?)
  • Appropriate assignment due times (does an 8 a.m. due time lead to early-morning-hours submissions?)

I likely can’t share much detailed data on this blog due to student privacy issues, but I hope to at least be able to summarize some high-level take-aways. My goal is to have some of this information available by the middle of July 2017.

Thoughts on crafting a recursion problem for teaching CS1/CS2

This semester (Spring 2017), I taught our CS2 course. About a week of this course covers recursion, after an introduction to the topic in CS1. One problem I posed to my students turned out to be “nifty,” and I plan to use similar problems in the future.

The students are first exposed to this version of a “find in array” function, which treats the array as a recursive data structure. It has a very “standard”-feeling recursive structure: an if statement to check a base case, an else-if to check another base case, and an else to handle the recursive case. (You may need to click on the image to enlarge it, then hit the back button to return to this post.)
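
Since the function itself appears only as an image, here is a rough sketch of that standard if/else-if/else shape, written in Python for illustration. The function and variable names are placeholders of my own choosing, not the course’s actual code.

```python
def find(arr, target, i=0):
    """Recursively search arr[i:] for target, treating the array as a
    first element followed by a smaller (size N-1) array."""
    if i == len(arr):          # base case 1: ran off the end, not found
        return False
    elif arr[i] == target:     # base case 2: current element matches
        return True
    else:                      # recursive case: search the rest of the array
        return find(arr, target, i + 1)
```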

I then present the following recursive function (again, you may need to click on the image to enlarge it, then hit the back button to return to this post):

and ask the students to address the following questions.

  1. What is the evidence that this is a recursive function?
  2. In “simple sentences” (not code), describe the two scenarios in which this function would return false.
  3. How is an array of some size N treated recursively – as a single element followed by a size N-1 array, or as two size N/2 arrays?
  4. Under what conditions will this function not make use of the recursive function call? You may want to refer back to the idea of lazy (AKA short-circuit) evaluation studied earlier in the course.
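
The second function is likewise shown as an image; in spirit, it collapses the whole body into a single boolean expression. A hedged Python reconstruction, consistent with the questions above but not necessarily identical to the code in the image, might look like:

```python
def find(arr, target, i=0):
    # A single boolean expression of the shape X && (Y || Z):
    #   X: i < len(arr)       -- are we still within the array?
    #   Y: arr[i] == target   -- does the current element match?
    #   Z: recursive call     -- otherwise, search the rest of the array
    # Short-circuit (lazy) evaluation of `and`/`or` supplies the base
    # cases: Z is never evaluated when X is false or Y is true.
    return i < len(arr) and (arr[i] == target or find(arr, target, i + 1))
```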

Students generally did very well on questions 1 and 3, but there were a number of errors on questions 2 and 4. To point students in the right direction, I suggested they think about the function as a large X && (Y || Z) expression, and then consider (a) under what conditions that expression would return true or false, (b) under what conditions it would be fully evaluated or not, and (c) how a lazy-evaluation analysis of the expression relates to the original recursive implementation.

I plan on using similar problems in the future, as this style of problem exposes students to the notion that not all recursion has an if-[else if]-else structure, and it requires thinking through how the AND operation, the OR operation, and lazy evaluation work.

PythonTutor visualizations of each implementation are available via these links: standard version, lazy evaluation version.