Hi AIRG folk,
At this week's AIRG meeting, we will discuss generalizations of Convolutional Neural Networks.
In a nutshell:
- CNNs work great for images because they exploit the mesh-like relationships between pixels. CNNs impose a useful inductive bias in that setting.
- However, there are many kinds of data where the variables are related, but those relationships are messy -- they are described by some arbitrary graph, rather than a rectangular mesh.
- Is there a way to retain some of the benefits of CNNs in this generalized setting? How do we generalize a convolution to domains where "sliding windows" and "strides" don't make sense?
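To make the idea concrete, here is a minimal numpy sketch of one graph-convolution layer in the style of the Kipf & Welling GCN propagation rule, H' = relu(Â H W) with Â = D^{-1/2}(A + I)D^{-1/2}. The function name and the toy graph are illustrative, not from any particular paper's code:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: aggregate degree-normalized neighbor features.

    A: (n, n) adjacency matrix, H: (n, f) node features, W: (f, f') weights.
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU nonlinearity

# Tiny example: a 3-node path graph with 2-dim node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 4)
```

The point to notice: there is no sliding window or stride anywhere -- each node just averages (in a degree-normalized way) over its neighbors, which reduces to something very CNN-like when the graph happens to be a regular grid.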
I recommend a couple of papers on this subject:
I plan to spend our time walking through some of the math, building intuition about how these things work.
See you on Wednesday!
-David Merrell