
Post mortem, autumn 2016


Dear diary, a few days ago (6, tbh) we finished the autumn course. 'twas mostly terrible. Here are a few 'hits' and 'misses' I made.

Overall

  • Most people (70%+, self-assessed) are able to solve the standard "classify image/text"/"generate sequence"/whatever problems
  • Some people (25%, self-assessed) are merely familiar with some keywords, which may or may not help them later in their research/job. I can't tell to what extent that was due to poor teaching and to what extent it was them not investing the time/effort.
  • There are roughly 4-5 students who dove into the topic and went on to study it beyond the basic curriculum. That's awesome. But these are the people who would have done well even if I had just told 'em "Guys, you may want to hack some deep learning. Here are some useful links" (no data to back that).
  • Projects failed (as an incentive): the only people who did them would have done them anyway.
  • We really messed up the grading system. The result was overcomplicated because we didn't discard the unhelpful rules.

Coverage

  • maybe split week 1 into two weeks: one about the core principles (& manual backprop) and one about the tricks (& Theano)
  • [several student opinions] maybe need dedicated [maybe optional] material on manual backprop
  • [several student opinions] add some words (and maybe an optional assignment) about other approaches to NN training: contrastive divergence & RBMs, DNI, iRprop, alternative backprop techniques
  • [no data] generative models & lidl's seminar should be given after the basic Bayesian methods seminar because of some dependencies
  • [one student opinion] maybe shift the deep RL part to an earlier timeslot and push generative models further back?
  • [jh self-assessment] probably need to cover autoencoders a bit earlier. Perhaps near the advanced vision section.
  • [jh self-assessment] maybe add the differentiable data structures stuff from dl-course
  • maybe rearrange lectures to mix the math-heavy stuff with the case-study stuff

Lectures

  • Surprisingly, students' own opinions support the informal lecture style. I still wonder whether informal lectures are as effective as normal ones, but at least they're not repelling ppl. There were 2 Strong No's though (vs 11 Strong Yes's).
  • compile PDFs from the outset
  • add some recommended reading after each lecture for the top 10% self-motivated ppl
  • [e. sokolov advice] add even more explanatory materials on convolutions @week2
  • [jh self-assessment] lecture 4 is okay, but it probably needs a more interactive part to facilitate RNN understanding
  • [jh self-assessment] lectures 6, lecture

Seminars

  • Students support the "What's wrong with this network" idea. Need to make more of these.
  • May want to showcase some of the best homeworks [no data to back this].

Homeworks

  • No data to back it, but I believe that "Fill in your code" style homeworks ought to be mixed with "Write me from scratch" ones to make sure students understand the whole pipeline.
  • HW0 should remain simple to incentivize doing bonus stuff. Perhaps incentivize bonuses even more explicitly.
  • splitting the old mnist-hw into two was [seemingly] a good idea; both ended up as tasks of moderate complexity [from student responses]
  • [grading table] the base assignments (hw0-hw4) were completed by the majority of students, while the second part went largely unnoticed
  • fix the hw5 mess-up :)
  • Publishing almost-solved HWs should be a no-go
  • The same goes for the HW lore parts

Other

  • maybe the current grading system actually discourages people from doing the second half of homeworks
  • should think 5 freaking times before introducing a new grading rule mid-course
  • moar autotests & checkers
  • could've been much worse :)