Derek Zahn on my personal knol:
>"Given limited resources, there must be a trade-off
between the number & the length of connections in such network..."
>I don't understand what you mean by "length" here... It seems that the topology etc of the network would be the important properties, not physical measurements...
Here I assume that the innate topology of the neocortex is roughly the same across individuals: genetic variation is very minor. On the other hand, the “dense vs. sparse” bias (genetic or perinatal) requires very little information, see the “Developmental factors” section. Of course, adult topology is largely acquired, but the acquisition process itself is affected by innate biases.
By “length” (of axons) I mean the average distance between connected nodes (minicolumns?), in whatever topology. Given a fixed total length of connections (resources), a greater average length of individual one-to-one connections must come at the cost of a smaller total number of these connections. Think of spindle neurons: very few but very long connections. So, this bias would produce a sparser network with longer-range & more selective associations (concepts). Selection itself is probably through some variation of Hebbian “fire together, wire together”.
Let’s face it: the brain is physical, its resources are limited, and there are trade-offs to be made. Again, this knol is on gross neural bias only; I deal with the algorithmic level (not necessarily neuromorphic) on my “Intelligence” knol.
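A rough way to see the trade-off in numbers (a toy sketch of my own, not part of the knol; the figures are arbitrary): with a fixed wiring budget, the affordable number of connections falls in inverse proportion to their average length.

# Toy sketch of the wiring-budget trade-off (illustration only, arbitrary units):
# with a fixed total axon length, the number of connections scales inversely
# with the average length of each connection.

def num_connections(total_wiring: float, avg_length: float) -> float:
    """Connections affordable under a fixed wiring budget."""
    return total_wiring / avg_length

budget = 1_000_000.0  # total axon length available, arbitrary units

# Dense bias: many short, local connections.
print(num_connections(budget, avg_length=10.0))    # 100000.0
# Sparse bias: few long-range connections (think spindle neurons).
print(num_connections(budget, avg_length=1000.0))  # 1000.0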
Todor Arnaudov:
Peaks of cognitive abilities. Saturation of learning and generalization novelty seeking.
Hi Boris,
I'm not ready to discuss the neurological stuff, but I can discuss the years at which cognitive abilities peak.
I think some of the abilities are "flat", or at least could be "emulated" without deep generalization, and their peak may depend more on social status, drive for power, and focus than on a special generalization shift. Science also has a social-status bug, because researchers are usually expected to (and do) serve their masters' directions until their 30s (PhD, post-doc...).
Language and stories maybe don't have that deep a hierarchy, I don't know, but I think gifted writers and poets may reach high or "perfect" skill as early as their 20s or even their teenage years. ("Perfect" means there's not much further to go in style and in how to tell a story interestingly.)
This can happen even without reading lots of sample fiction. Acknowledgement, or the time span needed to write influential works, may take decades, though.
Also, you call art "fluff", but I believe the talent to write stories includes a good deal of generalization.
I think art is an imitation of algorithms: the worst authors copy data, while the talented and original ones induce and understand algorithms (patterns) that could generate plausible data; their algorithms are more robust and harder to reverse-engineer given only the artwork.
(I agree that for writing literary criticism, reading lots of books over many years is helpful, though.)
Maybe I'm just an exception, but I was authoring pretty high-generality stuff at ages 17-19, such as philosophy (including "my theory") and [science] fiction and fantasy with philosophical elements; I was also doing language engineering (lexical and semantic enrichment of Bulgarian), "genre and style" engineering, and solid sociolinguistics research.
It was a peak, and I have an explanation for why it declined in the following years: saturation & distractors. :)
There's a phenomenon I call exhaustion, or saturation, of learning. Saturation is not only external (e.g. you don't get social support for the activities and give up); there is a crucial cognitive part, related to boredom and the conditions under which the cognitive algorithm should skip overly predictive patterns. That's a form of novelty seeking, and I think it contributes to the shift to higher-generality concepts after lower ones are saturated.
When the mind extracts patterns from a given domain (a set of raw data/patterns), initially it does so fast and improves quickly. This can happen either at the same level of generalization (reaching high predictability & precision) or across levels, as increasingly abstract generalizations are discovered. However, the process slows down in both directions, eventually stalling at the highest level of generalization discovered. The mind cannot find a higher level of generalization, gets bored, and tends to switch to new domains in order to find:
- more unpredictable/complex patterns, starting from the lowest level
- a steeper function of generality increase (until another saturation)
I'd call this "generalization novelty seeking".
I suspect that persons with a higher tendency to search for inter-domain generality, and fast learners, don't freeze in one single domain because they feel this saturation of generalization.
The general knowledge gained is reused between domains and makes learning new domains faster. Eventually domains run out and merge, and this is accelerated by inter-domain generalizations showing that different things are the same thing with different names.
After inter-domain saturation, the mind has no choice but to concentrate on higher concepts from the now-merged domains, which seemed saturated before, and try to generalize further. Otherwise it would just be bored to death... :)
The not-that-inter-domain learners tend to focus on making one or a few narrow domains "perfect". They don't care, or don't notice, that most of the time their progress is very slow or nonexistent; they're reaching precision and generalization limits and doing the same thing over and over again with no improvement.
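As a rough illustration of this "switch when saturated" heuristic (a toy sketch under my own assumptions, not anything specified in this thread): track the marginal gain in predictability per domain and jump to a fresh domain once the gain falls below a boredom threshold.

# Toy simulation of "generalization novelty seeking" (illustrative assumptions only):
# a learner stays in a domain while its predictive gain per step is high, and
# switches to another domain once the gain saturates below a boredom threshold.

def learn_step(progress: float) -> float:
    """Diminishing returns: each step closes a fraction of the remaining gap."""
    return progress + 0.3 * (1.0 - progress)

def novelty_seeking(num_domains: int = 4, boredom_threshold: float = 0.02) -> list:
    progress = [0.0] * num_domains          # per-domain predictability, 0..1
    domain = 0
    for _ in range(60):                     # total learning steps available
        before = progress[domain]
        progress[domain] = learn_step(before)
        gain = progress[domain] - before
        if gain < boredom_threshold:        # saturation: jump to the next domain
            domain = (domain + 1) % num_domains
    return progress

print(novelty_seeking())  # every domain ends up near saturation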
> I think some of the abilities are "flat", or at least could be "emulated" without deep generalization, and their peak may depend more on social status, drive for power, and focus than on a special generalization shift. Science also has a social-status bug, because researchers are usually expected to (and do) serve their masters' directions until their 30s (PhD, post-doc...).
As I mentioned elsewhere, I try to focus on cognitive factors.
> Language and stories maybe don't have that deep a hierarchy, I don't know, but I think gifted writers and poets may reach high or "perfect" skill as early as their 20s or even their teenage years. ("Perfect" means there's not much further to go in style and in how to tell a story interestingly.)
Poets, more likely than novelists (form vs. content). Just because they all write doesn’t mean it’s on the same level of generalization. Anyway, I’d rather not discuss art.
> Maybe I'm just an exception, but I was authoring pretty high-generality stuff at ages 17-19, such as philosophy (including "my theory") and [science] fiction and fantasy with philosophical elements; I was also doing language engineering (lexical and semantic enrichment of Bulgarian), "genre and style" engineering, and solid sociolinguistics research.
> There's a phenomenon I call exhaustion, or saturation, of learning. Saturation is not only external (e.g. you don't get social support for the activities and give up); there is a crucial cognitive part, related to boredom and the conditions under which the cognitive algorithm should skip overly predictive patterns.
I was talking about the age of highest achievement. Just because you had energy, ambition, & did some work doesn’t mean you achieved much. My experience with your writing (including this comment) suggests that you’re after quantity rather than quality. It seems like a “specialist bias” to me, even as you’re trying to generalize. I think you got bored because you *did not* find any predictive patterns, had no patience to continue, & nobody paid any attention.
> That's a form of novelty seeking, and I think it contributes to the shift to higher-generality concepts after lower ones are saturated.
That’s a form of parroting, with wrong conclusions, & on the wrong knol.
> When the mind extracts patterns from a given domain (a set of raw data/patterns), initially it does so fast and improves quickly. This can happen either at the same level of generalization (reaching high predictability & precision) or across levels, as increasingly abstract generalizations are discovered. However, the process slows down in both directions, eventually stalling at the highest level of generalization discovered. The mind cannot find a higher level of generalization, gets bored, and tends to switch to new domains in order to find:
You didn’t explain why discontinuous search (jumping domains) would speed up generalization. “Boredom”, “saturation” are great pop-psych terms to *obscure* the subject.
> - more unpredictable/complex patterns, starting from the lowest level
>- a steeper function of generality increase (until another saturation)....
Confused. It’s ironic that I, a social science major who hadn’t even *seen* a computer till 22 yo, got into formalizing bottom-up pattern discovery when younger than you are. You started programming at, what, 10 yo? This discussion belongs on the “cognitive algorithm” knol, but please don’t comment till you can suggest quantifiable criteria. I really don’t need more distractions.
I'm leaving you and myself alone with my abusive nonsense; if you wish, delete my comments, but sorry - your overgeneralized offensive nonsense is wrong.
>I was talking about the age of highest achievement. Just because you had energy, ambition, & did some work doesn’t mean you achieved much. My experience
>with your writing (including this comment) suggests that you’re after quantity rather than quality. It seems like a “specialist bias” to me, even as you’re
>trying to generalize. I think you got bored because you *did not* find any predictive patterns, had no patience to continue, & nobody paid any attention.
I engaged a professional, die-hard, 40+ year old philosopher and philosophy writer with the opposite POV to discuss with me in many long letters, while I had read only a few high school textbooks and was magically inducing a "working" philosophy from my mind that a "master" like him couldn't "defeat".
Inventions from my linguistics/sociolinguistics were cited in at least two scientific papers (I wouldn't join the "mainstream" though, because I didn't have the education and was criticizing their bad terminology). I befriended a leading sociolinguist (40+ years old), and produced a comprehensive and solid concluding work called "The Decline of the Language of Bulgarian Society".
My electronic dictionary with enriched Bulgarian has thousands of downloads and counting; it was uploaded to a download site and published in a magazine through somebody else's promotion. Before that I was interviewed for another magazine, for newspapers and radio, and, surprisingly to you, I *declined* a request for a TV interview, when I was already bored of understanding and knew this wouldn't help achieve what my research aimed at and what it concluded.
>got into formalizing bottom-up pattern discovery when younger than you are
Because I've never really tried to formalize it, I guess.
BTW, I can speak about the language of social scientists, and you apparently don't sound like a typical SS graduate. In Bulgarian it seemed like trivial common-sense descriptive blah-blah; I was disappointed that it could have such low underlying complexity (I was a high school student in a *technical* school and could understand and explain that stuff better than those graduate social idiots). The writings were just masked with a bunch of pointless foreignisms they call "terminology" - to prove it was a "science". I suspect it was the same in the USSR.
Bye
Todor, your comments aren’t abusive, just distracting. My replies are often offensive, but that’s how you deal with distractions. If I didn’t think there’s a slight chance you might eventually contribute, I wouldn’t reply at all. My “overgeneralization” is simply a matter of selective focus.
>> I engaged a professional, die-hard, 40+ year old philosopher and philosophy writer with the opposite POV to discuss with me in many long letters, while I had read only a few high school textbooks and was magically inducing a "working" philosophy from my mind that a "master" like him couldn't "defeat".
...
Not much of an achievement, is it?
>> you apparently don't sound like a typical SS graduate.
Much of my SS background was in the US, that was a lifetime ago, & I am not typical anything.
>>got into formalizing bottom-up pattern discovery when younger than you are
>Because I've never really tried to formalize it, I guess.
There's a reason for that.
Look, I am sorry to keep offending you, but it’s not personal (though you do have a very annoying habit of self-promotion). There’s only one thing worth focusing on, & I don’t care if I offend the rest of humanity to do so.
>My “overgeneralization” is simply a matter of selective focus
It's anti-promotion of your methodology - makes wrong predictions.
>Not much of an achievement, is it?
Of course that was about *nobody paid any attention*. Content is too long a topic & I'm tired of self-promotion to explain. I hadn't seen anything phil. to really surprise me after those years (or just a year).
> It's anti-promotion of your methodology - makes wrong predictions.
You have a point there; I never heard of a social scientist or a philosopher doing much to formalize cognition. But that may have more to do with the dysfunctional nature of the social institutions that represent these fields. Yes, just about all writers on the subject have math, CS, or EE backgrounds. But that might be because they have tangible accomplishments in their fields, which gives them the confidence to tackle such grand ambitions & convinces people to pay attention. And these fields are closely related on a basic level - it's all formal information processing. But the level of complexity is vastly different, so the results aren't impressive. Basically, there's no institutionalized field that's fit for the problem. It's like science in the Middle Ages: if you want to do it, you're on your own.
> Content is too long a topic & I'm tired of self-promotion to explain.
There's a difference between self-promotion & writing up ideas in a coherent form. If you cared enough about the content to actually work on it, just give me a link. As it is, your writings in Bulgarian are just chatter & story-telling; you're not translating them because they're not worth it (& Google does it pretty well). It feels strange to keep hearing about "your theory" as if it were some kind of intellectual status symbol. It's all about you & your accomplishments, & next to nothing about the subject matter.
>Basically, there's no institutionalized field that's fit for the problem. It's like science in the Middle Ages: if you want to do it, you're on your own.
Right.
>> Content is too long a topic & I'm tired of self-promotion to explain.
>There's a difference between self-promotion & writing up ideas in a coherent form. If you cared enough about the content to actually work on it, just give me a link. As it is, your writings in Bulgarian are just chatter & story-telling; you're not translating them because they're not worth it (& Google does it pretty well). It feels strange to keep hearing about "your theory" as if it were some kind of intellectual status symbol. It's all about you & your accomplishments, & next to nothing about the subject matter.
I'm bored of discussing this too. I guess sometimes it's a defense, like "leave me alone, I'm busy! I'm not ready! I'm not focused! (and never have been)"
While you are focused on the one most significant thing, I've been busy and focused on *many* most significant things a day.
Yes - I don't think it's worth the time to translate it in that form, and no one would read that long s*; I apparently have more important things to do & should compress it to next to nothing or something.
However, your definition is again overgeneralized, because as a philosophy my s* was fine. I wouldn't be here if it were complete junk, & whatever more I say would be self-promotion. I'm sick & tired of this, want to be constructive, and am sorry for the self-promotion and the spam.
One last spam: a question from a student. He said he was amused by your sentence "you need boring life" and others, and asked:
- Is Boris' theory falsifiable? Where does his confidence come from?
Funny, isn't it? I didn't have an answer. I've asked you in the past as well; you answered "implementation is trivial, once you have a formal theory". Great, and what if you work 40 years only to find your formal theory was wrong?
> Is Boris' theory falsifiable?
Ah, yes, Popperian philosophy. It’s wrong, both corroboration & falsification are a matter of degree. Think of it in Bayesian terms: facts don’t prove or disprove empirical theory, they simply increase or decrease its predictive value. In my approach, the empirical part is the definition of intelligence, & “falsifying” means finding some essential function of intelligence (in common-sense terms) that it doesn’t cover. Then I would have to generalize the definition, but that’s what I am doing anyway. The rest of my theory is deductions from the definition, where the test is not facts but internal consistency (as in math).
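To put the degree-of-belief view in concrete terms (a toy sketch with numbers of my own choosing, not from the knol): a single observation neither proves nor refutes a hypothesis, it just shifts its posterior probability by Bayes' rule.

# Toy Bayesian update (illustrative numbers only): evidence doesn't prove or
# disprove a theory, it raises or lowers its predictive value by degree.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

belief = 0.5                          # start undecided about the theory
belief = posterior(belief, 0.9, 0.3)  # an observation the theory predicts well: belief rises
print(round(belief, 3))               # 0.75
belief = posterior(belief, 0.2, 0.6)  # an observation it predicts poorly: belief falls
print(round(belief, 3))               # 0.5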
> Where does his confidence come from?
Initially, probably my mother (she is pretty unique, very high oxytocin & serotonin to cortisol ratio:)). To develop long attention span, you need to have confidence to begin with, otherwise you’re stuck with 4Fs & computer games. That’s why I am skeptical about engineers, & then hard vs. soft science types. They can’t sustain their curiosity without short-term feedback, - tests, proofs, experiments, action. Intellectual insecurity.
But of course that’s just a start, confidence needs to be constantly reinforced. So, yes, successful engineers may develop confidence that affords them longer no-feedback attention span, & gain more of a “generalist bias”. But I don’t know if that can reverse early development, - the brain is not that plastic anymore. For me, the confidence is reinforced by the simple fact that I understand the issues better than anyone I’ve heard of. Kind of like you with your philosophers :).
> Great, and what if you work 40 years to find your formal theory was wrong.
I would spend 40,000 years if I had them, - nothing else is worth doing. It’s not a matter of right & wrong, it’s about making progress, faster than anyone else. Stop thinking in holistic terms, there’s no immutable “theory”. I generalize a problem & deduce solutions, it’s an incremental process. I am smarter than evolution & don’t need no stupid trial & error outside of my head. That’s where intelligence is, right?
Thanks for the answers!
>That’s why I am skeptical about engineers, & then hard vs. soft science types. They can’t sustain their curiosity without short-term feedback, - tests, proofs, experiments, action. Intellectual insecurity.
I also don't like the too-pure hard science types, if that means they're too formal, too “mathematical” and focused only on details, but to me the best scientists are interdisciplinary hybrids.
>>Where does his confidence come from?
>Initially, probably my mother (she is pretty unique, very high oxytocin & serotonin to cortisol ratio:))
Indeed, you've once said "love is a stupid addiction". I agreed then, but recently felt ashamed of that and reconsidered “stupid”. To you everything is stupid and a waste of time, but the causes of the start of the neurotransmitter addiction cycle in real love are cognitive: matches and predictions, she's like you've wanted her to be, as if made for you. Love at first sight is possible (empirically proven), and one can also fall in love before puberty.
Addiction exists for a reason: while the beginning and the end of love are destructive to the mind, the middle is stable and puts neurotransmitters and hormones in a comfort state (this is a desired state; addiction helps keep you there); it encourages focus on the most significant one, instead of novelty seeking; there's a dedicated, long-lasting intellectual partner to discuss with; and not least, there's plenty of oxytocin in the neurotransmitter soup.
>But of course that’s just a start, confidence needs to be constantly reinforced. So, yes, successful engineers may develop confidence that affords them longer no-feedback attention span, & gain more of a “generalist bias”. But I don’t know if that can reverse early development, - the brain is not that plastic anymore. For me, the confidence is reinforced by the simple fact that I understand the issues better than anyone I’ve heard of. Kind of like you with your philosophers :).
Regarding the ages of peaks: if I'm not mistaken, Marx & Engels did the initial core of their work in their 20s (1840s). OK, "it's not much". :)
> the best scientists are interdisciplinary hybrids
Not necessarily as scientists, definitely not now. One of the reasons people go into hard sciences is the promise of certainty, something you don't get when you go into meta-science.
> Indeed, you've once said "love is a stupid addiction"
Damn, I gave you an excuse to go off about love again. Addiction is when you focus on things you shouldn't be focusing on, even if that puts you in a "comfort state".
> Marx & Engels did the initial core of their work in their 20s (1840s). OK ,"it's not much". :)
Right. Ideology is really a form of art: the purpose is to make an impression, not to make sense. But this is not an excuse for you to go off about art:).
>> Indeed, you've once said "love is a stupid addiction"
>Damn, I gave you an excuse to go off about love again. Addiction is when you focus on things you shouldn't be focusing on, even if that puts you in a "comfort state".
But by putting you in a "comfort state", some addictions might be indirectly helpful for other purposes, for those without hormonal or neurotransmitter genetic advantages.
>> Marx & Engels did the initial core of their work in their 20s (1840s). OK ,"it's not much". :)
>Right. Ideology is really a form of art: the purpose is to make an impression, not to make sense. But this is not an excuse for you to go off about art:).
Communism is an ideology, but Dialectical Materialism is a philosophy.
It's quite general in its definitions, but if nothing more, DM at least notices important aspects such as the mind imitating its inputs and building itself on them; emergent behavior; evolution; the hierarchical organization of matter and the transition to higher levels of matter while preserving the "good" from the past; going from the specific to the general/abstract, etc.
> But by putting you in a "comfort state", some addictions might be indirectly helpful for other purposes, for those without hormonal or neurotransmitter genetic advantages.
Good luck with that, but I don't think she'll leave you in that state long enough. Women are practical people; they have their priorities, & stimulating your abstract thoughts is not likely to be one of them (although they're great at faking common interests at first).
> Communism is an ideology, but Dialectical Materialism is a philosophy
DM was an afterthought for Marx (he never actually used the term), mostly borrowed from the Positivism fashionable at the time. He was a "philosophical" rabble-rouser at heart. The "Dialectical" part is meaningless, & "Materialism" is just an excuse to trash religion (not that there's anything wrong with that).
Thanks... Actually "love's no friend" of mine; I've been trying to kill it because it has been killing me, but I haven't managed yet - except in artworks. :) She's a deep character though; plot twists are possible.
This is about romantic love, but I guess the other kinds - friendship, friendliness, empathy and socialization in general - provide healthy neurotransmitters and hormones as well (a speculation). It's an "anti-reclusive health strategy" if your brain fails to generate the right chemicals while you feel lonely.
EDIT: BTW, after reading a bit on the topic of oxytocin, I suspect I may have high oxytocin as well, even when lonely or in love unrequitedly. Maybe that's why I don't succeed in killing love and keep falling back in.
>DM was an afterthought for Marx (he never actually used the term), mostly borrowed from the Positivism fashionable at the time.
OK, I guess it's more of Lenin than Marx.
>He was a "philosophical" rabble-rouser at heart. "Dialectical" part is meaningless, & "Materialism" is
>just an excuse to trash religion (not that there's anything wrong with that).
I do see low-complexity, obvious definitions and plays on words. I guess one of the basics, the never-ending fight between two opposites/contradictions, can be derived from the minimal possible number of different elements (two), plus the basic DM assumption of never-ending motion/change (and maybe the ideological assumption of class struggle).
Ah, and another question from Georgi. I've told him what you've told me you're doing for a living and why.
He asked:
Georgi: Isn't it a dangerous job? What's going to happen to your advanced yet unintelligible-by-others work if an accident happens? It might be lost to the world. [Do you care?]
I said my mother must've had high oxytocin, definitely not me. Opposite effect. Anyway, all that neurobabble is pointless, use your common sense. Kicking the "life" habit is like kicking any other habit. It's hard at first, but if you stick to it, *&* work semi-productively, the work will take over as the main habit.
Lose life, move to the countryside or something. Try it, go on vacation, anything else is just an excuse.
The best confidence is the one you gain by doing real work.
Appreciate the concern, but my job is not dangerous at all, & only takes ~1 hour of my time per shift. But even that is excessive; I'm seriously considering quitting it.
>I said my mother must've had high oxytocin, definitely not me. Opposite effect.
OK - I presumed oxytocin effects might be linked to your health and confidence.
>Appreciate the concern, but my job is not dangerous at all, & only takes ~1 hour of my time per shift. But even that is excessive; I'm seriously considering quitting it.
:)
>Lose life, move to the countryside or something. Try it, go on vacation
That's a good idea; I've been considering it (in the mountains) and may try it this summer, but it would probably be short. It may be a silly excuse to you, but I couldn't sustain a living for long on my savings and current scarce earnings.
You give prizes, but it's risky because reaching there may take me too long and I may fail.
In order to relax, I need a back-up plan and financial security... :-|
> OK - I presumed oxytocin effects might be linked to your health and confidence.
No, that's probably early serotonin exposure (http://www.raysahelian.com/serotonin.html). Serotonin is upstream from oxytocin & far more general (oxytocin is specific to social interactions). That pop-sci article on your blog totally ignores it, probably because the effects are too complex & not as flashy. The "peace of mind" effect is receptor-specific & not well understood. My point is, you can have peace of mind without going through all the nonsense of social interaction :).
Another thing that article totally misinterpreted is the fact that oxytocin is anti-addictive.
What that means is, prior exposure to oxytocin will make you crave social support less, not more, so you may get less social :).
> You give prizes, but it's risky because reaching there may take me too long and I may fail. In order to relax, I need a back-up plan and financial security... :-|
Of course it's risky; anything worthwhile is. But consider the alternative: you'll never make a difference. You're so far behind, any delay means you may never catch up. Are you willing to take that risk? Relaxing is more a matter of lifestyle & immediate environment than financial security (heck, you can grow your own potatoes :)).
Anyway, I sent you a loan to be paid by a future prize, just to show that I am serious.
You owe me an insight :).
>> OK - I presumed oxytocin effects might be linked to your health and confidence.
>No, that's probably early serotonin exposure (http://www.raysahelian.com/serotonin.html). Serotonin is upstream from oxytocin & far more general (oxytocin is specific to social interactions).
OK (social interactions - "animate objects"... )
>That pop-sci article on your blog totally ignores it, probably because the effects are too complex & not as flashy. The "peace of mind" effect is receptor-specific & not well understood.
Thanks for reading!
>My point is, you can have peace of mind without going through all the nonsense of social interaction :).
Right - ascetics, monks...
>Another thing that article totally misinterpreted is the fact that oxytocin is anti-addictive. What that means is, prior exposure to oxytocin will make you crave social support less, not more, so you may get less social :).
I think there are clues for this from life: picking up a girlfriend and "real love" often cause less interest in socializing with anyone but The One; also - losing touch with friends.
>Of course it's risky; anything worthwhile is. But consider the alternative: you'll never make a difference. You're so far behind, any delay means you may never catch up. Are you willing to take that risk? Relaxing is more a matter of lifestyle & immediate environment than financial security (heck, you can grow your own potatoes :)).
>Anyway, I sent you a loan to be paid by a future prize, just to show that I am serious.
>You owe me an insight :).
Thank you for your generosity and expectations! :)
Hope to deserve the loan... I just couldn't do anything too radical immediately or for too long.
Collaboration may be found; I've already met a core of smart, engaged students who are willing to keep in touch and continue discussions out of class. I plan to give open lectures and another, more advanced and focused course or two at the University next year; it may include lessons on generalizing (I hope to have progressed by then), and brain-storming/generalizing real problems together.
Regarding love, it's distracting and hurtful, but it may turn out to help concentration anyway - if it fails to do so through oxytocin, it may succeed by turning into action my maxim: "I'm most inspired when I'm most despaired"...
> "I'm
most inspired when I'm most despaired"...
Probably in the wrong direction, desperation shrink attention span. Good read: http://www.theameric anscholar.org/solitu de-and-leadership/
Thanks for the link. I agree with the essay, with some exceptions: I would argue about multi-tasking and social networks, I don't think it's black and white.
>> "I'm most inspired when I'm most despaired"...
>Probably in the wrong direction, desperation shrinks attention span.
It's perhaps quite poetic & sentimental; there's a continuation of the maxim: "I'm the most inspired creator" ~ the most despaired. It shouldn't be a regular despair: some grief and a sense of hopelessness (prediction of inevitably undesirable outcomes) is not real (chemical) despair; more likely I Repair and Inspire again.
There might be a "vector" of different inspirations/despira itions [of ...]. I've
got some thoughts that could be related to such a "vector", distractors, what
you call ADD bias/search for novel specifics, domain jumping and "threads" in
mind (mind as not really integrated system), but I'd think some more.
If I'm not mistaken, this direction is related to the 4th level in your cognitive hierarchy; you've mentioned economy of cognitive resources or something.
>> "I'm most inspired when I'm most despaired"...
>Probably in the wrong direction, desperation shrink attention span.
It's perhaps quite poetic & sentimental, there's a continuation of the maxim: "I'm the most inspired creator" ~ the most despaired, it shouldn't be a regular despair. Some grief and understanding of sort of hopelessness (prediction of inevitably undesirable outcomes) is not a real (chemical) despair, more likely I do Repair and Inspire again.
There might be a "vector" of different inspirations/despira
If not mistaken this direction is related to the 4-th level in your cognitive hierarchy, you've mentioned Economy of cognitive resources or something.
DeleteBlock this userReport abusive comment
> There might be a "vector" of different inspirations/desperations [of ...]. I've got some thoughts that could be related to such a "vector", distractors, what you call ADD bias/search for novel specifics, domain jumping and "threads" in mind (mind as not really integrated system),
Right, desperation might lead to radical change, & you definitely need it, but you use every excuse in the universe to avoid it.
Novelty seeking has many different aspects, you need to be analytical about it.
In my interpretation, valuable “novelty” is actually an incrementally abstract type of correspondence.
> If I'm not mistaken, this direction is related to the 4th level in your cognitive hierarchy; you've mentioned economy of cognitive resources or something.
Those levels are way out of date, too coarse & analogical. Resource allocation is what every level does.
My work now is strictly quantitative & incremental, the levels are defined by the type of correspondence they select for. Higher types are recurrent subsets of lower types.
The first four are: magnitude ) matched magnitude ) projected match ) additional projection...
Try to formalize those, on the "cognitive algorithm" knol :).
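(A reader's sketch: to make those first types a bit more concrete, here is one possible toy formalization in Python for a one-dimensional stream of magnitudes. The function names, the choice of min() as "matched magnitude" and the decay factor are my own illustrative assumptions, not definitions from the knol.)

# Toy sketch, assumptions mine: the first correspondence types over a 1-D stream.
# "magnitude"         -> the raw input value itself
# "matched magnitude" -> overlap of two compared magnitudes, taken here as min()
# "projected match"   -> match extrapolated over distance to a future coordinate

def compare(a, b):
    """Compare two magnitudes: return (match, difference)."""
    match = min(a, b)   # shared, i.e. matched, magnitude (an assumption)
    diff = b - a        # residual difference, kept for higher-level comparison
    return match, diff

def projected_match(match, distance, decay=0.9):
    """Project a match over 'distance' future steps with a simple decay factor."""
    return match * decay ** distance

stream = [3, 5, 4, 4, 6]
results = [compare(a, b) for a, b in zip(stream, stream[1:])]
print(results)                                     # [(3, 2), (4, -1), (4, 0), (4, 2)]
print(projected_match(results[0][0], distance=3))  # 3 * 0.9**3 = 2.187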
>desperation might lead to radical change, & you definitely need it,
>but you use every excuse in the universe to avoid it.
I've been doing lots of things and probably will keep doing them, but I think I am progressing in the right direction. A large part of my "distractors" over the last half a year or so has been reading/studying for, preparing and conducting the AGI course.
However, I'm tired of reading and it won't get the work done; I've always preferred thinking on my own, so this time-slice is about to switch to active mode.
>Novelty seeking has many different aspects, you need to be analytical about it.
OK.
>In my interpretation, valuable “novelty” is actually an incrementally
>abstract type of correspondence.
I supposed so - "novel generality"; maybe novelty that allows inducing novel generality. I think searching and getting lots of samples may help to find it, though. Having lots of similar patterns in mind promotes compression and generalization.
>> If not mistaken this direction is related to the 4-th level in your
>>cognitive hierarchy, you've mentioned Economy of cognitive resources or
>>something.
>Those levels are way out of date, too coarse & analogical. Resource
>allocation is what every level does.
OK. :)
>My work now is strictly quantitative & incremental, the levels are defined
>by the type of correspondence they select for. Higher types are recurrent
>subsets of lower types.
>The first four are: magnitude ) matched magnitude ) projected match )
>additional projection...
>Try to formalize those, on the "cognitive algorithm" knol :).
Nice, thanks. :) I haven't forgotten the other one either; I have reflected on it, but too little yet.
BTW, what's your take on Ben Goertzel? I'm trying to cover him, but I'm not sure how long I will keep it up. To me Schmidhuber's "tune" is better. I like Goertzel as an enthusiastic guru and popularizer, but he seems to be strongly influenced by high-level cognitive science & NLP... Maybe that's partly because he's quite impatient to sell products immediately, and the cognitive-architecture style is more socially acceptable/"apparently should be working"/"pop-sci"...
Reading is great, I just can't find anything useful.
> maybe novelty that allows inducing novel generality
Right, but that accumulation of data must be increasingly selective. Novelty corresponds to spatial discontinuity in input flow. That’s macro-selection, & syntactic differentiation of past inputs by comparison is micro-selection. Discontinuous input coordinate selection is directed by projection: the value of syntactic re-integration at that coordinate. Now, try to formalize those operations.
> BTW, what's your take on Ben Goertzel? I'm trying to cover him, but I'm not sure how long I will keep it up. To me Schmidhuber's "tune" is better.
Goertzel is a social butterfly / tinkerer. His definition of intelligence is meaningless, he has no theory & doesn’t think he needs one, - “it’s an engineering problem”. Schmidhuber is more coherent, but he doesn’t get *incremental*. Mathematicians are trained to deal with complex operations, they think starting simple is beneath them. Yet, without simple incremental steps there’s no scalability. Anyway, I’d rather discuss issues than people.
>> maybe novelty that allows inducing novel generality
>Right, but that accumulation of data must be increasingly selective.
And I guess, unlike "normal novelty", this kind of novelty may be found by re-evaluation/focus on very old recorded data, leading to "Eureka!". Generalization is lossy, thus lower-generality records should have more details/features to select from.
>Anyway, I’d rather discuss issues than people.
Fine, I meant Goertzel's work; and you did discuss the issues: the "engineering problem", mathematicians' training, and incrementality.
>Now, try to formalize those operations.
OK, I've got tough tasks - I will comment on the Cognitive algorithm knol when I have something to say...
> And I guess, unlike "normal novelty", this kind of novelty may be found by re-evaluation/focus on very old recorded data,
Again, the oldest data would be on the highest levels, although there can be another hierarchy of storage costs within each level. On the first level the cost also includes default comparison, then it's only storage, from new: RAM, to old: tape. The original form of novelty seeking would be to actually "look" at the new locations. That would require motor feedback, but yes, it's not principally different from feedback within the hierarchy.
So, older inputs should be displaced in FIFO order into cheaper storage, as long as the cost of transfer & storage declines faster than predictive value of the inputs.
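(A minimal sketch of that displacement rule, with the costs, value and horizon as purely hypothetical placeholders of mine: an input keeps moving to the next, cheaper tier only while moving and keeping it there still costs less than its estimated predictive value; otherwise it is deleted.)

# Sketch, assumptions mine: displace an old input to a cheaper tier only while
# transfer + storage cost stays below its estimated predictive value.
def should_displace(predictive_value, transfer_cost, storage_cost_per_step, horizon):
    total_cost = transfer_cost + storage_cost_per_step * horizon
    return total_cost < predictive_value

print(should_displace(5.0, transfer_cost=1.0, storage_cost_per_step=0.1, horizon=20))  # True: keep it
print(should_displace(1.5, transfer_cost=1.0, storage_cost_per_step=0.1, horizon=20))  # False: delete it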
I guess I was wrong to dismiss your idea of buffering old inputs.
Congratulations, you won the first prize! (did you have any problems with PayPal?). It’s worth more than $100, but the idea itself is simple, it needs to be justified in terms of costs vs. benefits.
>> And I guess, unlike "normal novelty", this kind of novelty may be found by re-evaluation/focus on very old recorded data,
>Again, the oldest data would be on the highest levels, although there can be another hierarchy of storage costs within each level.
I think I see - higher level --> higher range of patterns in space and time.
In this comment I meant something else - a hippocampal-style playback, sort of conscious(?) recall/re-evaluation. Records in memory are built from [sequences] of hierarchical pieces/concepts. When playing back, mind may select pieces at a given level of abstraction from one or many different records, which themselves were selectively recalled in a sequence. More abstract pieces/concepts may be induced from these pieces and the new higher concepts may be recorded back to the old memories.
I.e. mind may do this by introspection on data which are in memory anyway, no need to read or search in the world. (In the context of your line: "Reading is great, I just can't find anything useful")
Also I think it's related to the issue of wise books read too early. E.g. if a little boy reads Exupery's "The Little Prince" at school, he's not likely to understand the metaphors and deep meanings, but he may remember the stories literally. Many years later, if he recalls them even without re-reading, he might understand their moral. This particular phenomenon might not be "inducing", though, but "matching" to already understood higher concepts.
>On the first level the cost also includes default comparison, then it's only storage, from new: RAM, to old: tape. The original form of novelty seeking would be to actually "look" at the new locations. That would require motor feedback, but yes, it's not principally different from feedback within the hierarchy.
>So, older inputs should be displaced in FIFO order into cheaper storage, as long as the cost of transfer & storage declines faster than predictive value of the inputs.
>I guess I was wrong to dismiss your idea of buffering old inputs.
>Congratulations, you won the first prize!
Thanks! :)
>It’s worth more than $100, but the idea itself is simple, it needs to be
>justified in terms of costs vs. benefits.
OK :)
>(did you have any problems with PayPal?).
Unfortunately I did - sent you an email.
> In this comment I meant something else - a hippocampal-style playback, sort of conscious(?) recall/re-evaluation...
Damn, you've gone all analogical on me again :).
I understand, you're talking about extended storage within each level of detail | generalization, in case memory origin's location becomes relevant in the future. In my understanding, a level is ordered as a FIFO: proximity = priority (the sequence may include spatial frames, or whatever). New inputs displace the old ones till they get pushed out of the queue: selectively elevated, & deleted as obsolete on the current level.
That still stands, but I realized that this push-out should be multi-stage, - into less expensive memory (if available) instead of immediate deletion.
The first queue must be short because it's very expensive: all inputs are immediately compared, generating redundant representations (overlapping derivatives). The following stage queues can be much longer because they're cheaper: inputs are stored but not compared unless their location comes into "focus" again.
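(To make the two-stage contrast concrete, here is a rough Python sketch; the stage lengths, dict fields and the min()-based comparison are my own placeholders, not anything specified above.)

from collections import deque

# Sketch, assumptions mine: a short, expensive first-stage queue whose inputs are
# compared on entry, followed by longer, cheaper stages that merely store displaced
# inputs until their location comes back into "focus".
class MultiStageBuffer:
    def __init__(self, stage_lengths=(4, 16, 64)):
        self.stages = [deque(maxlen=n) for n in stage_lengths]

    def push(self, item):  # item: dict with "location" and "value" keys
        if self.stages[0]:  # compare only on entry into the first (expensive) stage
            item = dict(item, match=min(item["value"], self.stages[0][-1]["value"]))
        displaced = item
        for stage in self.stages:
            if displaced is None:
                break
            pushed_out = stage.popleft() if len(stage) == stage.maxlen else None
            stage.append(displaced)  # FIFO: the oldest input moves to the next stage
            displaced = pushed_out
        # whatever falls off the last stage is deleted as obsolete

    def recall(self, location):
        # later stages are only read when a location comes back into focus
        for stage in self.stages[1:]:
            for item in stage:
                if item["location"] == location:
                    return item
        return None

buf = MultiStageBuffer(stage_lengths=(2, 4))
for i in range(10):
    buf.push({"location": i, "value": i % 3})
print(buf.recall(4))  # an old input, now sitting uncompared in a cheaper stage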
>Damn, you've gone all analogical on me again :).
>I understand, you're talking about extended storage within each level of detail | generalization, in case memory origin's location becomes relevant in the future.
Nice. :) I haven't thought in these precise terms though (origin's location - I remember you mentioned it once in a recent post about the hippocampus on my blog). If you wish and have the patience, check out the translation of my old speculations and some new ones.
Regarding the prize and buffers: longer look-back buffers for recent inputs (seemingly at any level, for any purpose), and also for locations (context), would be an advantage for faster pattern discovery, allowing tracking back if needed - very useful for patterns extended in time ("delayed"), which is what my speculations in the knol began with.
Anyway, memory was a milestone in my old writings, a starting point - e.g. a theoretical induction of neocortex- and hippocampus-like modules/effects, based on the evidence of consciousness as biographical memory and the fact that the mind learns and performs increasingly well before & without having such memories at an early age.
I assumed hippocampus-style memory, or "Events Operating System/Memory" (EOS), is a somewhat higher level in mind than "Executive Operating System/Memory" (EXOS, neocortex and the patterns there).
EOS effects are more abstract than those of the "Executive OS" (EXOS, neocortex) because EOS is an add-on to neocortex; I suspect there must be levels of generality and "discretization points" already developed in neocortex in order for the hippocampus-style memory to start working.
Just published a translation of some sections and some new speculations on the decline of neuroplasticity in relation to the hippocampal-style memory (too long to put it here): http://artificial-mind.blogspot.com/2010/06/teenage-theory-of-mind-and-universe.html
...
Regarding your whole comment - indeed I think using FIFOs (either simple or priority queues), fast caches and levels of memory might be universal rules-of-thumb from engineering.
> Regarding the prize and buffers, longer look-back buffers for recent inputs (seems at any levels for any purpose) also for locations (context) would be an advantage for faster pattern discovery allowing track back if needed, very useful for patterns extended in time ("delayed"), with which my speculations in the knol began.
Yes, I said I was wrong to dismiss it… Oh, you want me to change my reply there? Will do, as soon as I edit the knol itself, it doesn't address that point at all.
> I assumed hippocampus-style memory, or "Events Operating System/Memory" (EOS) is somewhat higher level in mind than "Executive Operating System/Memory" (EXOS, neocortex and patterns there). EOS effects are more abstract than of "Executive OS" (EXOS, neocortex), because EOS is an add-on to neocortex, I suspect there must be levels of generality and "discretization points" already developed in neocortex, in order the hippocampus-style memory to start working.
This is backwards. Buffering is not higher than anything, there’s no abstraction going on, or any processing for that matter, just simple copying. It’s not higher in scope either, the macro-structure is hierarchy, sequence is a structure within its levels.
As for hippocampus, it’s not an add-on, the neocortex is (Hawkins is dead wrong here). Hippocampus, otherwise known as archicortex, is a primitive 3-layer structure, part of “reptilian brain”. Neocortex developed later, both in phylogeny & in ontogeny. The fact that hippocampus is necessary to form declarative memories is an evolutionary bug (brain is full of those). Ideally, the neocortex + thalamus, maybe striatum, should be doing all the work, the rest of the brain can go extinct.
Look, neuroscience at its current state is just a vague inspiration for understanding intelligence, you need to keep it separate from theoretical work.
> Just published a translation of some sections and some new speculations on the decline of neuroplasticity in relation to the hippocampal-style memory (too long to put it here): http://artificial-mind.blogspot.com/2010/06/teenage-theory-of-mind-and-universe.html
OK, not bad for a teenager, but can we please get on? For example, stop abusing "computerese" terms & acronyms that really just obscure the subject (for yourself). The neuroplasticity stuff sounds random to me.
If you have any ideas you want to discuss *now*, great, but you need to formalize them. Otherwise, your signal-to-noise ratio is too low to bother. Seriously, you *talk* about compression, why not try to practice it?
> Regarding your whole comment - indeed I think using FIFOs (either simple and priority queues), fast caches and levels of memory might be universal rules-of-thumb from Engineering.
You don’t need to know any engineering to understand these things. Yes, there’re plenty of useful ideas in engineering. But, for a strictly incremental approach, selecting the right ones is harder than deducing them from the first principles (kind of like picking good ideas from your writing). The economics change as you get into advanced math & engineering, but these should not be necessary for a basic learning algorithm. At least not at the stage I am working on now.
>Oh, you want me to change my reply there? Will do, as soon as I edit the knol itself, it doesn't address that point at all.
I don't mind, you're the host (but yes, it would be useful for other readers), I was rather thinking aloud/recalling that stuff/searching for connections...
>This is backwards. Buffering is not higher than anything, there’s no abstraction going on, or any processing for that matter, just simple copying. It’s not higher in scope either, the macro-structure is hierarchy, sequence is a structure within its levels.
Buffering - OK, but I guess episodic memory may need discretization points and patterns (compression) before it can work. Yes, mind can acquire these quickly to some extent.
>As for hippocampus, it’s not an add-on, the neocortex is (Hawkins is dead wrong here). Hippocampus, otherwise known as archicortex, is a primitive 3-layer structure, part of “reptilian brain”. Neocortex developed later, both in phylogeny & in ontogeny.
OK... Hmm... So do you suggest that the archicortex/hippocampus function in lower species, and what lasted in humans, might be literal copying of perceptions - saving locations/mapping the environment so the animal can find its lair and remember where there was food? (This makes sense to me.)
I suspect that functionally the archicortex might be a recorder/associative memory, but lacking generalization/compression/prediction capabilities, or it may have some compression, but as a side effect, like using low resolution/precision. BTW, I've been studying the reptilian brain and checked a little about the amphibian one (it's 3-layered as well) - amphibians seem much closer to reptiles than reptiles are to mammals, a smaller evolutionary step, so presumably it's easier to grasp something meaningful about it. Indeed, isn't the archicortex even of amphibian ancestry (archipallium)? At least what I've read is that reptiles also have a neopallium, which supposedly evolved into the neocortex.
>The fact that hippocampus is necessary to form declarative memories is an evolutionary bug
OK, interesting. :)
> evolutionary bug (brain is full of those). Ideally, the neocortex + thalamus, maybe striatum, should be doing all the work, the rest of the brain can go extinct.
I also think genes are messy and brain design is "spaghetti code". The brain has been patched over and over, and early design decisions have been dragged along all the time, because it was not possible otherwise with this technology. The higher layers had to be adapted to use the lower ones, and I think bugs are very likely to appear when there are functional overlaps between a new module and an old module. Such overlaps can be found easily, depending on how deeply the system is analyzed. I guess this can be the case with hippocampus and neocortex, because neocortex is also recording/copying perceptions.
Sorry to mention software engineering, but I guess this bugs-issue might be related to so-called "coupling": http://en.wikipedia.org/wiki/Coupling_(computer_science)
>Look, neuroscience at its current state is just a vague inspiration for understanding intelligence, you need to keep it separate from theoretical work.
OK...
>but can we please get on? For example, stop abusing "computerese" terms & acronyms that really just obscure the subject (for yourself). The neuroplasticity stuff sounds random to me. If you have any ideas you want to discuss *now*, great, but you need to formalize them. Otherwise, your signal-to-noise ratio is too low to bother. Seriously, you *talk* about compression, why not try to practice it?
I understand, I will get on with it; these terms were very old. I try to compress sometimes, but still often can't afford long enough sustained concentration, being distracted by other things waiting in the pipeline to be done...
>You don’t need to know any engineering to understand these things.
That's right about understanding, and other good concepts are also simple: pipeline (FIFO-related), branch prediction (prediction), out-of-order execution, superscalarity & parallelism in general. Practicing engineering helps keep in mind that they might be useful.
I guess that may go also for general software engineering guidelines, design patterns etc.
>Yes, there’re plenty of useful ideas in engineering. But, for a strictly incremental approach, selecting the right ones is harder than deducing them from the first principles (kind of like picking good ideas from your writing). The economics change as you get into advanced math & engineering, but these should not be necessary for a basic learning algorithm. At least not at the stage I am working on now.
OK
> I don't mind, you're the host (but yes, it would be useful for other readers),
What, both of them? Done.
> Buffering - OK, but I guess episodic memory may need discretization points and patterns (compression) before working. Yes, mind can get quick this to some extent.
Discretization, yes, that’s multi-stage buffering. Compression: only symmetrical, non-selective transforms. Selection = elevation, this is already hierarchical processing, buffering is within levels. Any discontinuous comparison (pattern discovery) generates redundant representations, thus requires selection to be compressive.
Also, buffering is more useful for spatial focus shifts, which are reversible, than for purely temporal “obsolescence”, which is not. Of course, reversal can be over derived, as well as original, coordinates.
> OK... Hmm... So do you suggest that archicortex/hippocampus function in lower species and what lasted in humans might be literal copying of perceptions - saving locations/mapping the environment/in order the animal to find its lair and remember where there was food. (This makes sense to me.)
I was talking about buffering in conceptual terms, the hippocampus probably does a bunch of other things too.
> BTW I've been studying about reptilian brain, checked a little about amphibian's (it's 3-layers as well), amphibians seem much closer to reptiles than reptiles to mammals, a smaller evolution step, supposing easier to grasp something meaningful about it.
> Isn't archicortex of amphibian ancestry (archipallium), at least what I've read is reptiles have also a neopallium, supposed to have evolved into neocortex.
Perhaps, don't know much about it.
> The higher layers had to be adapted to use the lower ones and I think bugs are very likely to come when there are functional overlaps between a new module and an old module. Such ones can be found easily, depending how deep the system is analyzed. I guess this can be the case with hippocampus and neocortex, because neocortex is also recording/copying perceptions.
Not also, almost all memory (sequential & hierarchical) is in neocortex. But we didn't evolve as free thinkers, in evolutionary context "important" information is about things that are "close" to you. I don't think hippocampus holds or transfers much memory, but it associates memories with locations, & strengthens the ones that are | will be "close". I am sure neocortex is perfectly capable of representing maps (as in temporal lobe), but hippocampus already did that, & was left at it. So, neocortex evolved to depend on hippocampus to tell it what's important enough to be conscious of (declarative).
> That's right about understanding, and other good concepts are also simple: pipeline (FIFO-related), branch prediction (prediction), out-of-order execution, superscalarity & parallelism in general. Practicing engineering helps keeping in mind they might be useful.
Right, but it also gives you a "man with a hammer" syndrome. Thinking in terms of engineering about the problem is one thing, actually training / working as an engineer on unrelated projects creates biases you're not even aware of. And all possibly practical projects are utterly *unrelated*.
Sorry if you don't really care about this neuro stuff, I put it here for completeness about palliums, because the archicortex even seems to be fish brain...
fish --> archipallium --> hippocampus
amphibia --> paleopallium --> cingulate cortex and other limbic cortex parts
reptiles --> neopallium --> neocortex
http://wiki.cns.org/wiki/index.php/Paleopallium/archipallium
http://en.wikipedia.org/wiki/Limbic_cortex
T>> I don't mind, you're the host (but yes, it would be useful for other readers),
B>What, both of them? Done.
Cool, I may include it in my resume! :P
B>Also, buffering is only useful for spatial focus shifts, which are reversible, not for temporal “obsolescence”, which is not.
Maybe I don't understand your point correctly, but I guess buffering of any irreversible sequences would be advantageous, namely because it would be impossible to go back and re-input them by the sensors.
B>Not also, almost all memory (sequential & hierarchical) is in neocortex.
OK, I was emphasizing on the supposed functional overlap - quality rather than quantity...
T>> That's right about understanding, and other good concepts are also simple: pipeline (FIFO-related), branch prediction (prediction), out-of-order execution, superscalarity & parallelism in general. Practicing engineering helps keeping in mind they might be useful.
B>Right, but it also gives you a "man with a hammer" syndrome. Thinking in terms of engineering about the problem is one thing, actually training / working as an engineer on unrelated projects creates biases you're not even aware of. And all possibly practical projects are utterly *unrelated*.
I don't mean that these ideas solve the AGI problem by themselves; they are general optimization/implementation suggestions that might speed things up and would be useful for "the solution". Computer engineering is mostly about providing raw speed (and I think it is incremental there so far), and optimizations of the implementation might be important in the very beginning of AGI.
Also, I like "big engineering" - design reaching to inventive ideas, architectural innovations, understanding leading to leaps - like IBM "Stretch" or some of Cray's computers. It's like science and art. However, to have a chance to practice professionally that kind of engineering you depend a lot on social status, which is usually gained with many years of activities, most of which consisting of predictable and boring, not creative work, solving problems that just need time to implement and debug.
Actually I do agree that you shouldn't practice that kind of engineering for too long, not to spend decades or a career there. You can grasp the important ideas and think of architectures with much less effort.
> Sorry if you don't really care about this neuro stuff,
It's fun, but doesn't seem to be terribly relevant. I prefer to think in terms of function, biological analogues are too macro, functionally mixed, & not well understood. That goes for “engineering” discussion too :).
B>Also, buffering is only useful for spatial focus shifts, which are reversible, not for temporal “obsolescence”, which is not.
> Maybe I don't understand your point correctly, but I guess buffering of any irreversible sequences would be advantageous, namely because it would be impossible to go back and re-input them by the sensors.
You're thinking in terms of costs, not benefits. Experience has no intrinsic value, the purpose is to predict, & you don’t predict the past. The data in a buffer is addressable by its coordinates, you retrieve it if:
a) the location of expected inputs matches that in a buffer again, in case of spatial shifts,
b) the pattern in new inputs is stronger than average, which means it should search further than expected, both forward & backward (in the buffer).
The second reason is equally valid for both spatial & temporal shifts, which is why I’ve corrected my previous reply before you answered it: “buffering is *more* useful for spatial focus shifts”. Sorry to keep changing it on you :).
So, you’re right, buffering for the second reason would be more important in irreversible shifts. But I think potential proximity is a lot more important reason to buffer data, - space is multi-dimensional & prediction is far more affected by external impacts than by past trajectory of the pattern.
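(A rough sketch of those two retrieval conditions; the field names, the flat buffer and the simple strength threshold are placeholders of mine, not part of the argument above.)

# Sketch, assumptions mine: retrieve buffered inputs if (a) the expected location
# re-visits a buffered one, or (b) a new pattern is stronger than average, so the
# search is extended backward into the buffer as well as forward.
def retrieve(buffer, expected_location=None, pattern_strength=None, avg_strength=1.0):
    hits = []
    for entry in buffer:
        if expected_location is not None and entry["location"] == expected_location:
            hits.append(entry)                           # condition (a): spatial focus returns
    if pattern_strength is not None and pattern_strength > avg_strength:
        hits.extend(e for e in buffer if e not in hits)  # condition (b): search deeper
    return hits

buffer = [{"location": (2, 3), "value": 7}, {"location": (5, 1), "value": 2}]
print(retrieve(buffer, expected_location=(2, 3)))   # hit by re-visited location
print(retrieve(buffer, pattern_strength=1.4))       # strong pattern: whole buffer searched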
Sorry this comment came up a bit long (split in two), but I needed to include the explanations.
Edited: I shortened it, too long a comment, I'll post details in my own place.
T>> Sorry if you don't really care about this neuro stuff,
B>It's fun, but doesn't seem to be terribly relevant. I prefer to think in terms of function, biological analogues are too macro, functionally mixed, & not well understood. That goes for “engineering” discussion too :).
I agree functional is a cleaner way, but I'll share a bit more; it's "neuro-functional-behavioral-evolutionary" and it's related to the other thread.
I suspect that minicolumns and neocortical structure and functionality could be reproduced by a series of simple transformations from simpler structures, such as replication, extension of the range of connections, variation, etc. Some neocortical functions were already implicit in fish, amphibian and reptile brains.
(...)
B> You're thinking in terms of costs, not benefits. Experience has no intrinsic value, the purpose is to predict
It hasn't, but the more complex/higher-resolution the environment, and the faster it moves than "full" evaluation in real time allows, the more important having exact records might be for delayed evaluation, because it becomes impossible to decide on the spot whether the information will be predictive in the future.
...Continues...
...Continues from the previous one...
Edit: Shortened, more detailed explanations would be posted in my blog or so.
>& you don’t predict the past.
It depends on what you mean by the past; sometimes you *do predict* the past: recalling the older past and searching for the reasons of how it got to the younger past, because you missed some details or didn't understand them then.
(...)
B> But I think potential proximity is a lot more important reason to buffer data, - space is multi-dimensional & prediction is far more affected by external impacts than by past trajectory of the pattern.
Maybe, but I suspect this might be over-generalized; I guess a good mind should be able to adapt, depending on experience and available resources. If the mind has to decide, it should try to predict what might be more important and what would be useful to buffer.
Some comments, pardon my typos
Boris,
I am no expert, but I am a Ph.D. student in Cognitive Psychology with a specialization in Neuroscience. I am a little lost reading your post, in part due to a mismatch between your terminology and that of neuroscience and cognitive neuroscience. I think your ideas, as I understand them, are interesting, but I would like to make a few corrections and comments.
In response to, "A functional unit of neocortex is a minicolumn, which seems to perform recognition / generalization function": microcolumns (or minicolumns) perform neither "recognition" nor "generalization", but are involved in sensory processing such as vision, audition, smell, etc. Your ability to recognize something as an object, or your ability to make conceptual generalizations, are high-level cortical functions. Microcolumns are organized anatomical structures that process particular features. For example, in V1, or primary visual cortex, specific neurons termed simple cells respond preferentially to specific features such as line orientation. These cells distinguish between different line orientations (such as / | \ ) by changing their firing rate. They respond strongly to a preferred orientation but may fire to a lesser extent to other orientations: the greater the difference in orientation between the stimulus and the preferred orientation, the less the cell will fire.
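A minimal sketch of that tuning behavior (illustrative only; the Gaussian fall-off and the specific parameters are assumptions, since real tuning curves vary):

```python
import math

def simple_cell_rate(stimulus_deg, preferred_deg, max_rate=50.0, bandwidth_deg=20.0):
    """Toy V1 simple-cell orientation tuning: firing rate peaks at the
    preferred orientation and falls off with the angular difference.
    Orientation is circular with a 180-degree period."""
    diff = (stimulus_deg - preferred_deg) % 180.0
    diff = min(diff, 180.0 - diff)                       # wrap into [0, 90]
    return max_rate * math.exp(-(diff ** 2) / (2 * bandwidth_deg ** 2))

# The greater the mismatch with the preferred orientation, the lower the rate:
for angle in (0, 22.5, 45, 90):
    print(angle, round(simple_cell_rate(angle, preferred_deg=0), 1))
```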
Recognition, a memory function, and generalization, the ability to transfer learning to a novel or related situation, are distinct abilities and brain processes.
I would caution you against generalizing, no pun intended, from work on autism or other patient populations to make claims about individual differences in normal human cognition. Non-autistic and autistic individuals can both do feature processing, can perceive objects by integrating features, and can remember those objects. Some but not all autistic individuals have difficulty with conceptual information. Autistic individuals are feature-focused, though they do have some interesting high-level perceptual deficits with objects and faces. I recommend the following paper:
Gastgeb, H.Z., Strauss, M.S. & Minshew, N.J. (2006). Do individuals with autism process categories differently? The effect of typicality and development. Child Development, 77(6), 1717-1729.
The concept of IQ and intelligence is an extremely dicey subject. I recommend the book "Tall Tales about the Mind and Brain: Separating Fact from Fiction", edited by Sergio Della Sala; the chapters are written by highly regarded (cognitive) neuroscientists.
In closing, I want to say that cognitive neuroscience is a long way off from addressing the type of questions that interest you; we simply aren't there yet (see Bruer's paper "Education and the Brain: A Bridge Too Far", Educational Researcher, 26(8), 4-16, Nov 1997). There is some information, but the field of Cognitive Psychology can more thoroughly address your ideas. I do recommend the following papers.
Morrison, Krawczyk, Holyoak, Hummel, Chow, Miller, & Knowlton (2004). A neurocomputational model of analogical reasoning and its breakdown in frontotemporal lobar degeneration. Journal of Cognitive Neuroscience, 16(2), 260-271.
Waltz, Knowlton, Holyoak, Boone, Mishkin, de Menezes Santos, Thomas, & Miller (1999). A system for relational reasoning in human prefrontal cortex. Psychological Science, 10(2), 119-125.
And finally, some shameless self-promotion: my chapter on the brain and expertise may be of interest to you. The entire book may be as well; my chapter is the only one that involves neuroscience, the rest deals with expertise as studied by cognitive psychology.
Hill, N.M. & Schneider, W. (2006). Brain changes in the Development of Expertise: Neuroanatomical and Neurophysiological Evidence about Skill-based Adaptations. In K. A. Ericsson, N. Charness, P. Feltovich, and R. Hoffman (Eds.), Cambridge Handbook of Expertise and Expert Performance. New York: Cambridge University Press.
Nicole M Hill
Last edited Jul 3, 2010 10:21 PM
Thanks for the comments & references, Nicole!
The mismatch in terminology is indeed formidable, & reflects corresponding mismatch in our conceptual frameworks.
First of all, recognition/generalization are distinct high-level abilities only if you define them as such for some high-level cognitive test. Unless so specified, recognition/generalization is simply a discovery of common elements among multiple inputs. Algorithmically, it's an iterative comparison (which discovers a match) & projection (which determines which inputs are compared): a step producing incrementally more general patterns/concepts. This step is not specific to any level of complexity.
Generalization starts from sensory processing, such as line-angle recognition in your V1 example, & continues into Association Cortices. To say that minicolumns do not perform generalization is a bit absurd. Neocortex consists of little but minicolumns (see "Cortex & Mind", p. 26), and every cognitive function can be reduced to generalization.
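A minimal sketch of that iterative comparison, not from the knol itself; it assumes inputs are already reduced to feature sets and reads "match" as plain set intersection, which is only one possible simplification of comparison & projection:

```python
def generalize(patterns):
    """One generalization step: compare neighboring inputs and keep
    their common elements as incrementally more general patterns."""
    return [a & b for a, b in zip(patterns, patterns[1:]) if a & b]

def hierarchy(inputs, levels=3):
    """Iterate the comparison step, each level feeding the next."""
    layers = [inputs]
    for _ in range(levels):
        nxt = generalize(layers[-1])
        if not nxt:
            break
        layers.append(nxt)
    return layers

# Toy inputs as feature sets; common features propagate upward as
# incrementally more general patterns.
inputs = [{"edge", "vertical", "red"},
          {"edge", "vertical", "blue"},
          {"edge", "horizontal", "blue"}]
print(hierarchy(inputs))
```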
Thanks for the pointer to Gazzaniga's article, I will mention it in the knol. "The evolutionary perspective" chapter there indirectly supports my premise: hemispheric asymmetry can be summarized as a relatively higher-generality bias of the left hemisphere. This seems to be a distinctly human feature, producing hugely greater overall generalization ability compared to our nearest relatives. The hemispheres do not normally operate independently; they are densely interconnected by the corpus callosum (CC). Some of this connectivity provides simple fault tolerance & sensory-motor field integration, as in animals. But because of the asymmetry ("lateralization") in humans, the transfer of data between hemispheres will likely be between different levels of generality. This mismatch will add another step of generalization to the hierarchy of the left hemisphere.
I couldn't find your chapter online(?), but you seem to work with MRI, which is too high a level for me. I think the most interesting part is processing within a minicolumn, or at most a macrocolumn.
Cognitive Psychology is also too high-level for me, I am into the most basic mechanisms of cognition. Neuroscience can be quite suggestive, given a meaningful theory. My ideas here are difficult to understand out of the context of my "Intelligence" knol: http://knol.google.com/k/boris-kazachenko/intelligence/27zxw65mxxlt7/2# though it's a lot more abstract.
I appreciate your interest and the references, though it may take me a while to get to them, as this is not my main focus.
Boris.