I’ve been working on a crazy theory for my first solo paper and I would appreciate some advice from leading researchers here:)
"Theory of Adaptive Learning"
To my knowledge, none of the existing deep learning algorithms fully captures the adaptive nature of intelligence. I believe it is a fundamental component missing from the principles that govern current AI systems.
I define it as a kind of learning in which one party (say a student) adapts their framework of understanding to better match what is being taught or said by another person or model (say a teacher). If we could measure the nature of this transfer of understanding, I believe it could help improve the planning and reasoning capabilities of AI systems. Looking back at the theory of evolution, adaptation is a fundamental component of how humans came to be. Today's so-called groundbreaking architectures, specifically large language models, tend to have static parameters that are almost impossible to change or update in real time after training. This fundamentally hinders their ability to reason, plan, and accomplish objective-driven tasks the way humans do. Intelligence is dynamic.
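To make the idea a bit more concrete, here is a toy sketch of what I mean by a student adapting toward a teacher: an online distillation-style loop where the student's parameters get nudged toward the teacher's output distribution one example at a time. This is purely my own illustration, not a proposed architecture; the models, loss, and learning rate are all placeholders.

```python
# Toy illustration only: a "student" taking small online gradient steps
# toward a frozen "teacher's" output distribution, one example at a time.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 4)   # placeholder teacher; treated as frozen
student = nn.Linear(16, 4)   # placeholder student; its parameters adapt
optimizer = torch.optim.SGD(student.parameters(), lr=1e-2)

def adapt_step(x):
    """One online adaptation step: move the student's predictive
    distribution toward the teacher's on this single input."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x), dim=-1)
    student_log_probs = F.log_softmax(student(x), dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# A stream of incoming examples; the student updates after each one.
for _ in range(5):
    x = torch.randn(1, 16)
    adapt_step(x)
```

The point of the sketch is just that adaptation here means the weights themselves change as new examples arrive, not that this particular loss or optimizer is the right one.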
Now, this cannot be done with current autoregressive LLMs, because their parameters are fixed after training. RAG does help by injecting new information at inference time, but it never actually updates the model's parameters, so to me it is basically cheating and doesn't count as intelligence. There's a pressing need for a natively adaptive architecture - The Goal of This Paper
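To spell out why I call it cheating: in a retrieval setup the new knowledge only ever lives in the prompt, while genuine adaptation would change the weights themselves. A toy contrast, again just my own illustration with placeholder model, tokenizer, and retrieval functions:

```python
# Toy contrast only: retrieval leaves the weights untouched,
# while a gradient step actually changes them.
import torch
import torch.nn.functional as F

def rag_style_answer(model, tokenize, query, retrieve):
    """Retrieval augmentation: retrieved text is prepended to the prompt.
    The model's parameters are identical before and after this call."""
    context = retrieve(query)            # fetch external documents
    prompt = context + "\n\n" + query    # new information lives only in the input
    with torch.no_grad():
        return model(tokenize(prompt))

def parameter_update_answer(model, optimizer, inputs, targets):
    """Genuine adaptation: a gradient step that permanently alters the weights."""
    loss = F.cross_entropy(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```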
"Theory of Adaptive Learning"
Of all the deep learning algorithms at least to my knowledge, there’s none that fully covers the adaptive nature of intelligence. I believe it is a fundamental missing component of current AI governing laws.
I define it as a kind of learning wherein one person (say a student) adapts their framework of understanding to better suit that of what is being taught or said by another person/model (say a teacher). If we could measure the nature of this transfer learning. I believe it could help improve planning and reasoning capabilities of AI systems. If we look back at the theory evolution, adaptation is a fundamental component of human evolution. Today's so-called groundbreaking architectures or models, specifically large language models tend to have static parameters with constraints that are almost impossible to change or update in real-time after training. This fundamentally hinders their ability to reason, plan and accomplish objective-driven tasks as we humans do. Intelligence is dynamic.
Now this cannot be done with current autoregressive llms as their parameters are fixed with static constraints, even though RAG do help in updating model parameters in real-time but its basically cheating and doesn’t count as intelligence. There’s a pressing need for a natively adaptive architecture - The Goal of This Paper