One of the neuroscience topics that science fiction has taken and run away with is artificial intelligence (AI). AI is, by the way, a catch-all term for human-like cognition produced through our own artifice, ideally also possessing self-awareness. Sorry for the near-circular definition, but it's a genuinely hard topic to pin down. I would expand that definition to include, for functional purposes, all technology that seeks to replicate human neurological activity.
There are many potential benefits to such technology, not the least of which is the ability to create robots that can flexibly perform menial labor, freeing up human hours for more important tasks (as has been the historic trend driving technological advancement). As we know from media like Isaac Asimov's "I, Robot" or "The Matrix," there are potential problems with self-aware machines. I would argue, though, that there are other uses for mimicking neurons than simply creating smarter backhoes. Interfacing a prosthetic arm with our nervous system will require a significant understanding of how neurons work and interact, and one solution to degenerative diseases like Alzheimer's might be replacing damaged tissue with artificial equivalents.
For right now, I'm just going to discuss the replication of complete neural networks and set aside the comparatively simpler applications, like interfacing a prosthetic. The apparent consensus approach for creating AI is to produce what is essentially a plastic supercomputer: one that can adapt to learn new things and retain memories the way our brains do, with their phenomenal problem-solving abilities. This has proven to be extremely difficult.
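To make "plastic" a little more concrete, here is a minimal sketch (in Python, on an arbitrary toy task; the learning rate and epoch count are illustrative assumptions) of the classic perceptron rule, where the machine isn't programmed with the answer but adapts its weights from examples:

```python
# A perceptron learns logical OR from labeled examples rather than
# being hard-coded with it -- "plasticity" in its simplest possible form.

def train(samples, rate=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0                      # synaptic weights and bias
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                   # error-driven weight change
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table
print(train(samples))   # weights that now compute OR
```

That toy converges in a few passes; the difficulty the rest of this post describes is everything separating it from an actual brain.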
This is a topic I've been thinking about for months as we (in our seminar) expand our understanding of the human neural system. The conclusion I've come to is that our own system is nearly irreducibly complex, meaning that any AI system we make should probably be based not strictly on human cognition but on its underlying principles.
At this conference, I've learned some new things about how our brains work that would make it a lot more difficult to use humans as a model. I think it's a good time, then, to introduce the concept. Below is a series of problems that make our cognition hard to model with a computer, with an explanation of why each one damages that process.
1: We have no idea why we’re conscious
Sure, we can explain what's different between our brains and animals' brains, and even narrow down the areas that might be responsible (mostly that our frontal cortex is larger and organizationally more complex), but that critical spark still eludes our understanding.
2: We don’t know what the critical subunit of the brain is
Is it circuits? Neurons? Synapses? LeDoux would argue, and I would agree, that synapses seem to be the best candidate for a functionally irreducible information-processing substrate in the brain, but it's extremely difficult to trace how a single synapse affects a single neuron, and from there a group of neurons, a circuit, a brain area, and finally cognition as a whole. Furthermore, complicated rules govern which synapses are activated when a neuron fires, which makes it hard to say "it's at the synapses" without the full set of rules.
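The first link in that chain, at least, is easy to caricature. In this sketch (a bare threshold unit; the weights and threshold are made-up numbers), changing a single "synapse" flips the whole cell's output, which is why it's tempting to call the synapse the fundamental unit:

```python
# A neuron as a weighted sum of its synapses: one synaptic change
# is enough to flip the cell's output.

def neuron_output(inputs, weights, threshold=1.0):
    """Fire (return 1) if the summed synaptic drive crosses threshold."""
    drive = sum(x * w for x, w in zip(inputs, weights))
    return 1 if drive >= threshold else 0

inputs  = [1, 1, 0, 1]           # presynaptic activity
weights = [0.4, 0.5, 0.9, 0.2]   # synaptic strengths

print(neuron_output(inputs, weights))   # 1: the cell fires
weights[1] = 0.1                        # weaken a single synapse...
print(neuron_output(inputs, weights))   # 0: ...and the output flips
```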
3: Neurons are linked to tons of other neurons
Computers, at the simplest level, transmit information with a simple on-off switch. The brain, on the other hand, has an extraordinarily complicated system determining when a given neuron will fire and which synapses an action potential will activate. As one example of this complexity, a neuron has on average 5,000 synapses, each with its own excitatory or inhibitory effect operating to a different degree on the postsynaptic neuron. We will need to understand an awful lot more about these interconnections before we can even think of replicating them in circuits.
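Even a crude stand-in for that arrangement gets complicated fast. Here's a leaky integrate-and-fire toy with 5,000 random synapses (the weights, leak, threshold, and slight excitatory bias are all assumptions, nothing like real membrane dynamics):

```python
import random

# 5,000 synapses, mixed excitatory (+) and inhibitory (-), feeding one cell.
N_SYNAPSES = 5000
weights = [random.uniform(-0.8, 1.0) for _ in range(N_SYNAPSES)]

def step(potential, active, leak=0.9, threshold=50.0):
    """One time step: decay, integrate the active synapses, fire on threshold."""
    potential = potential * leak + sum(weights[i] for i in active)
    if potential >= threshold:
        return 0.0, True            # spike, then reset
    return potential, False

v = 0.0
for t in range(100):
    active = random.sample(range(N_SYNAPSES), 200)   # 200 synapses get input
    v, fired = step(v, active)
    if fired:
        print(f"spike at t={t}")    # timing varies run to run
```

And that's with every simplifying assumption imaginable; the real rules for which of those 5,000 synapses matter at any moment are far from understood.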
4: Neuronal firing is affected by more things than the neurons surrounding it
Hormones and other non-proximal, global modulators play what we can only assume are important roles in the brain. Simply replicating a circuit, therefore, will not replicate the functionality of our brains; we may well need to allow for subtle influences from other areas of the brain.
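One cheap way to picture this (a sketch, not a claim about how any real hormone works; the gain range is invented) is a single circuit-wide "modulator" level that rescales every synapse at once:

```python
# The same circuit responds differently depending on a global modulatory
# signal that no local wiring diagram would reveal.

def drive(inputs, weights, modulator=0.5):
    """Weighted synaptic sum, scaled by a circuit-wide gain in [0.5, 1.5]."""
    gain = 0.5 + modulator           # modulator ranges over [0, 1]
    return gain * sum(x * w for x, w in zip(inputs, weights))

inputs, weights = [1, 0, 1], [0.6, 0.3, 0.5]
print(drive(inputs, weights, modulator=0.0))   # 0.55: low global state
print(drive(inputs, weights, modulator=1.0))   # 1.65: same circuit, same input
```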
5: Frequency of firing also affects synapses
Long-term and short-term potentiation work by changing the characteristics of a synapse in response to how it is used in cognition, and they seem to be important players in our memory systems. If you want to model the brain, you have to find a mechanical way to express this.
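The textbook caricature of this is a Hebbian rule: synapses that are used together get stronger. A minimal sketch (the learning rate and ceiling are arbitrary; this is an analogy to potentiation, not a biophysical model):

```python
# "Cells that fire together wire together": repeated co-activation
# leaves a lasting change in synaptic strength.

def potentiate(weight, pre_active, post_active, rate=0.05, w_max=1.0):
    """Strengthen the synapse when pre- and postsynaptic cells fire together."""
    if pre_active and post_active:
        weight = min(w_max, weight + rate)
    return weight

w = 0.2
for _ in range(10):                 # ten paired activations...
    w = potentiate(w, True, True)
print(round(w, 2))                  # 0.7: ...leave a persistent trace
```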
6: Some neurons fire by themselves
Pacemaker cells, for an easy example, fire on a predictable rhythm (which, of course, can be changed by stimulation that affects them in different ways). These pacemaker cells are in the heart, and I learned today that they're also involved in the respiratory system.
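Mechanically, intrinsic firing is easy to sketch (here a constant bias current stands in for the cell's own ion-channel dynamics, which is a big assumption): the cell spikes on a fixed schedule with no input at all.

```python
# A cell that fires rhythmically with zero synaptic input.

def pacemaker(steps, bias=1.0, threshold=10.0):
    v, spikes = 0.0, []
    for t in range(steps):
        v += bias                   # intrinsic drive, no inputs needed
        if v >= threshold:
            spikes.append(t)
            v = 0.0                 # reset after each spike
    return spikes

print(pacemaker(50))   # [9, 19, 29, 39, 49]: a self-generated rhythm
```

The modeling headache isn't the rhythm itself; it's that these cells break the tidy assumption that a neuron's output is purely a function of its inputs.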
7: We’re pretty clueless as to how it’s all set up
We understand that circuits exist in the brain, we have a decent idea of how the basic wiring processes work in early development, and we even have some insight into how pruning works later in life. But we can only describe these things; it's much more difficult to use these hypotheses to create a similarly functional model.
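Pruning, at least, has an obvious toy version (the strength floor here is an invented number; real pruning is activity-dependent and far subtler):

```python
# Developmental pruning as a crude filter: connections that stay weak get cut.

synapses = {"a": 0.9, "b": 0.05, "c": 0.4, "d": 0.02}

def prune(synapses, floor=0.1):
    """Drop synapses whose strength never rose above the floor."""
    return {name: w for name, w in synapses.items() if w >= floor}

print(prune(synapses))   # {'a': 0.9, 'c': 0.4}
```

Describing the outcome like this is easy; deriving the rules that decide which connections *should* survive is the part we can't do.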
Based on all of these things, I've come to a few conclusions. We might not ever be able to model the brain in a computer completely. Neurons have too many working parts and too many unpredictable effects on surrounding neurons. Even if we managed to get a good handle on all of the ways neurons interact, the only way we could make it functional in a computer would be to use a ridiculous amount of processing power (a rough sense of the scale is sketched below). It's probably simpler to use the tools our bodies give us and focus on stem cell research and DNA modification to replace damaged tissue.
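For a back-of-envelope sense of "ridiculous" (the neuron count, synapse average, and update rate are rough ballpark figures, not measurements):

```python
# Order-of-magnitude arithmetic on brain-scale simulation.

neurons  = 86e9    # roughly 86 billion neurons in a human brain
per_cell = 5000    # average synapses per neuron, as above
rate_hz  = 100     # assume each synapse is updated 100 times per second

synapses = neurons * per_cell
print(f"{synapses:.1e} synapses")                # ~4.3e+14
print(f"{synapses * rate_hz:.1e} updates/sec")   # ~4.3e+16, before any biology
```

And that counts only bookkeeping, with none of the modulation, plasticity, or intrinsic dynamics from the list above.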
Man, I could write like 50 pages on this. But I won’t. At least, not now. I hope this was interesting! I felt like I was rambling.
It seems to me that while all of the above is seemingly true, the converse direction is equally if not more interesting (possibly because it's more promising). That is, we can glean knowledge from the field of computer architecture (which has evolved continuously, arguably with a new generation every 18 months, for something on the order of half a century) about how complex computational systems organize themselves.
Consider, e.g.: http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000748
PS I just stumbled upon this from a random tweet hashtagged #sfn11. Awesome.
Henry, that's an extremely interesting article! A really fresh perspective, especially considering that my take was fairly one-sided and framed as an argument. I'm glad someone's working through the theoretical underpinnings of what it would take to actually make a brain representation work.
PS Thanks for the kind words! Heh.