title | published | url | video_id | channel_id | id | text | start | end |
---|---|---|---|---|---|---|---|---|
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3680.28 | I'm gonna just briefly talk about this. And he trashes the neuro-symbolic people a bit | 3,680.28 | 3,694.92 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3686.44 | like he trashes the people that say no, no, you know, neural networks can never do whatever. | 3,686.44 | 3,700.68 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3694.92 | And he says pretty clearly look, neural networks can represent trees, I've given you a system | 3,694.92 | 3,708.72 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3700.6800000000003 | also BERT can output parse trees. So shut up, I guess. And he comes up with this GLOM | 3,700.68 | 3,716.04 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3708.72 | BERT name, which, you know, is already coined. If you wanted to do GLOM-BERT, that's | 3,708.72 | 3,728.92 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3716.04 | already taken. Sorry. I also, by the way, coined the name MeGLOMania | 3,716.04 | 3,733.96 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3728.9199999999996 | right now. Okay, if you want to use it, it better be a pretty cool machine | 3,728.92 | 3,742.44 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3733.96 | learning system and be based on GLOM. Right? That was the paper. I think it's a cool system. | 3,733.96 | 3,747.28 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3742.44 | It has a bunch of parts that are maybe not super friendly to hardware at this time, like | 3,742.44 | 3,752.24 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3747.28 | this iterative procedure. But honestly, it is not much more than a neural network. Sorry, | 3,747.28 | 3,759.36 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3752.2400000000002 | a recurrent neural network with very complicated recurrence functions. The video extension | 3,752.24 | 3,765 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3759.36 | might be a bit tricky. And the regularization might be a bit tricky, as might | 3,759.36 | 3,769.8 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3765.0 | the exact objective. So the denoising autoencoder objective isn't super detailed in | 3,765 | 3,776.36 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3769.8 | the paper, he simply says, reconstruct a corrupted version of the input. How exactly the input | 3,769.8 | 3,782 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3776.36 | is handled, maybe there's a CNN, maybe the CNN actually feeds information into multiple layers. | 3,776.36 | 3,788.92 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3782.0 | None of that is exactly specified. So there's lots to figure out. I do think the ideas are | 3,782 | 3,797.32 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3788.92 | very cool. And I love idea papers. And therefore I recommend that if you're more interested, | 3,788.92 | 3,802.6 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3797.32 | give this thing a read, give this video a like, share it out. And I'll see you next | 3,797.32 | 3,819.72 |
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | 2021-02-27 15:47:03 | https://youtu.be/cllFzkvrYmE | cllFzkvrYmE | UCZHmQk67mSJgfCCTn7xBfew | cllFzkvrYmE-t3802.6 |  | 3,802.6 | 3,819.72 |
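The segments around t≈3747–3788 describe GLOM only loosely: an iterative, recurrent-style update over per-location level embeddings, trained by reconstructing a corrupted version of the input, with the exact input pathway (e.g. a CNN) left unspecified. The block below is a minimal, purely illustrative PyTorch sketch of that reading; every module name, layer shape, the consensus step standing in for same-level attention, and the corruption scheme are assumptions made for this example, not the design from Hinton's paper or the video.

```python
# Illustrative sketch only: a GLOM-like stack of per-location level embeddings,
# updated iteratively ("a recurrent neural network with complicated recurrence
# functions") and trained with a denoising reconstruction loss. All details here
# are assumed, not taken from the paper.
import torch
import torch.nn as nn

class GlomLikeColumns(nn.Module):
    def __init__(self, n_levels=5, dim=64):
        super().__init__()
        self.n_levels, self.dim = n_levels, dim
        # Per-level bottom-up and top-down networks, shared across locations.
        self.bottom_up = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_levels)])
        self.top_down = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_levels)])
        self.encode = nn.Linear(3, dim)   # stand-in for the unspecified CNN front end
        self.decode = nn.Linear(dim, 3)   # reconstruct the clean input from level 0

    def forward(self, x, n_iters=8):
        # x: (batch, locations, 3) corrupted input patches
        batch, locs, _ = x.shape
        levels = [self.encode(x)] + [
            torch.zeros(batch, locs, self.dim, device=x.device)
            for _ in range(self.n_levels - 1)
        ]
        for _ in range(n_iters):            # the iterative / recurrent part
            new_levels = []
            for l in range(self.n_levels):
                bottom = self.bottom_up[l](levels[l - 1]) if l > 0 else self.encode(x)
                top = self.top_down[l](levels[l + 1]) if l < self.n_levels - 1 else 0.0
                # Crude stand-in for same-level attention: pull each location's
                # embedding toward the mean embedding at that level.
                consensus = levels[l].mean(dim=1, keepdim=True)
                new_levels.append((levels[l] + bottom + top + consensus) / 4.0)
            levels = new_levels
        return self.decode(levels[0])

# Denoising-autoencoder-style objective: corrupt the input, reconstruct the original.
model = GlomLikeColumns()
clean = torch.rand(2, 49, 3)                                  # 2 images, 49 patches
corrupted = clean * (torch.rand_like(clean) > 0.3).float()    # zero out ~30% of values
loss = nn.functional.mse_loss(model(corrupted), clean)
loss.backward()
```

The sketch only makes the two vague pieces concrete: the settling procedure is an unrolled recurrence trainable with ordinary backprop, and the objective is plain reconstruction of an uncorrupted target.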