The Predictive Part of Predictive Processing

[Prerequisite: a working understanding of Bayes’ Theorem, and ideally some time spent playing with other machine learning techniques]

One of my favorite blogs has a review of the book “Surfing Uncertainty”, a somewhat accessible text about Predictive Processing.

Avi’s Glean: Awesome! This is a good model of the brain, and you should read the book. The model (probably) gives us a way to attach hard numbers to every layer of cognitive science.

That said, I will be writing blurbs critiquing various sections of the book review, and eventually of the book itself. My long-term intent is to refine the theory for myself.

“As these two streams move through the brain side-by-side, they continually interface with each other. Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can. This can end up a couple of different ways.”
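The integration Scott describes can be made concrete. If each level holds Gaussian beliefs, then combining the top-down prediction with the bottom-up sense data via Bayes’ Theorem reduces to precision-weighted averaging (the example numbers below are hypothetical, just to show the mechanics):

```python
def fuse(pred_mean, pred_var, sense_mean, sense_var):
    """Bayes' Theorem for two Gaussian evidence sources:
    posterior precision is the sum of the two precisions,
    and the posterior mean is the precision-weighted average."""
    post_var = 1.0 / (1.0 / pred_var + 1.0 / sense_var)
    post_mean = post_var * (pred_mean / pred_var + sense_mean / sense_var)
    return post_mean, post_var

# Top-down prediction: "the cup is at 10 cm", fairly confident (var 1.0).
# Bottom-up sense data: "it looks like 14 cm", noisier (var 4.0).
mean, var = fuse(10.0, 1.0, 14.0, 4.0)
# The posterior (10.8) lands nearer the more precise source, the prediction.
```

This is the “couple of different ways” it can end up: whichever stream carries more precision dominates the posterior at that level.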

Assuming that’s actually the theory and not just Scott’s interpretation, my intuition is that this is where the theory is most wrong: specifically, the use of Bayes’ Theorem. I think Bayes’ Theorem is posited here because:

1. It’s accurate.

2. It’s simple.

3. It’s what we understand.

But I don’t think that’s going to be correct because:

1. The brain isn’t necessarily going for what’s accurate. Accuracy and effectiveness can come apart for a variety of reasons: a fast, biased estimate can beat a slow, exact one.

2. Neural layers are capable of much more nuanced and complex modeling than a single Bayesian update. This holds in your and my intuitions, in commonly held neural network models, and specifically in the predictive processing model itself.

3. Since we’re in the early days of applied probability, there’s no reason, except as a placeholder, to reach for the tool we already understand rather than admit we don’t yet know the right one.
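A toy illustration of point 2: a learned linear layer combining the same two signals can use arbitrary weights, whereas exact Bayesian fusion of Gaussians is forced to use normalized precision weights. The weights below are hypothetical, not fit to any data:

```python
# A learned layer is free to pick any weights and bias at all.
w_pred, w_sense, bias = 0.9, 0.3, -2.0
prediction, sense = 10.0, 14.0
layer_out = w_pred * prediction + w_sense * sense + bias

# Exact Bayesian fusion (prediction variance 1.0, sense variance 4.0)
# is constrained: the weights must be the normalized precisions.
bayes_out = (prediction / 1.0 + sense / 4.0) / (1.0 / 1.0 + 1.0 / 4.0)

# layer_out = 11.2, bayes_out = 10.8: the learned layer occupies a much
# larger function space, of which the Bayes-optimal rule is one point.
```

Nothing stops a trained layer from landing near the Bayesian answer when that answer is effective, but nothing forces it to, either, which is exactly the gap between points 1 and 2 above.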
