I've had some experience dealing with neural networks and probabilistic models, but I'm looking for a resource specifically regarding the practical use of artificial neural networks in data compression.
Any suggestions?
Well... I'm not sure this will help you much, but try looking at university websites; I always find the best materials on data compression there!
Sorry I don't know a better answer. =/
http://www.scholarpedia.org/article/Hopfield_network
This can be a good starting point (though I'm not sure a Hopfield network is what you really need; I haven't used neural networks for data compression myself).
A Google search of this topic turned up nothing of any significance. This may just be a semantic problem, though, depending upon what exactly you mean by "data compression".
If, by this term, you mean lossless compression of any arbitrary type of data, then I'm not surprised that there's nothing out there about doing data compression with neural networks. Neural networks are more in the category of "fuzzy logic", in the sense that they (theoretically) yield similar outputs in the presence of similar (but not identical) inputs.
Neural networks could, perhaps, be used in situations where some loss of data was acceptable, like in audio or image compression. Here is a semi-interesting link on the subject of image compression with neural nets: http://www.comp.glam.ac.uk/digimaging/neural.htm
I'd say this area of research was pretty wide-open in general. However, this leads me into one of my favorite rants on the subject. Artificial neural networks were first developed decades ago, when the knowledge of how biological neurons function was much less than it is today. Even given the state of knowledge at the time, the artificial neural units they developed are light-years away from real neurons.
In neural network programming, neurons are basically conceived of as passive switches that weigh their inputs and then "decide" to either fire or not fire, which affects the input weights of the next layer of neurons, and so on, until the output layer is reached. Biological neurons really don't work like this; it's more accurate to say that real neurons have a sort of natural rate at which they fire action potentials, and the inputs from pre-synaptic neurons generally make a neuron fire at a faster or slower rate. I've never read an account of artificial neural networks that takes this phenomenon into consideration at all.
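The contrast between the two models can be sketched in a few lines. This is a toy illustration of my own, not code from any real library; the function names and the linear "drive" formula in the rate-based unit are assumptions made purely for illustration:

```python
def perceptron_unit(inputs, weights, threshold):
    # The classic artificial neuron: weigh the inputs, then "decide"
    # to either fire (1) or not fire (0) -- an all-or-nothing switch.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def rate_coded_unit(inputs, weights, base_rate):
    # Toy rate-based unit: the neuron has a natural baseline firing
    # rate, and inputs push that rate up (excitation) or down
    # (inhibition) rather than gating a single binary output.
    # Returned value is a firing rate (e.g. spikes/sec), clamped at 0.
    drive = sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, base_rate + drive)
```

With the same inputs and weights, the first unit can only answer "fired or didn't", while the second carries graded timing information in its output rate.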
Here are some additional characteristics (off the top of my head) of biological neurons that I have never seen incorporated into artificial neural networks in any way, shape or form:
Axon length: the propagation of an action potential along a neuron's axon is actually a relatively slow process, so the length of a neuron's axon has a major effect on the timing of the overall structure.
Myelination: related to axon length, myelination speeds up the conduction of an action potential by at least an order of magnitude. In the human brain, not all neurons are myelinated, so this factor also has a significant effect on the timing of the structure.
Neurotransmitters: neurotransmitters are the chemicals released from the pre-synaptic neuron after an action potential; they float across the synapse and "lock" into receptors on the post-synaptic neuron, either exciting or inhibiting it (i.e., speeding up or slowing down its firing rate). There are a number of different types of neurotransmitters in biological nervous systems, which is relevant because of:
Reuptake: the process by which neurotransmitter molecules, after unbinding from their receptors, are reabsorbed by the pre-synaptic neuron. Reuptake rates differ between neurotransmitters, and are further affected by some classes of drugs and hormones found in the brain.
Systemic factors: one well-known behavioral effect in nervous systems is the release of systemic factors like hormones into the bloodstream and into the brain. These chemicals are either actual neurotransmitters themselves (which, when released into the bloodstream, cause an effect as if each pre-synaptic neuron that uses that particular neurotransmitter just fired) or else are chemicals that inhibit the reuptake of particular neurotransmitters (which cause an effect as if each pre-synaptic neuron using that neurotransmitter that recently fired continues to fire away).
And much, much more. I first became interested in biological neurons and artificial neural networks almost 20 years ago, and I'm absolutely astonished at how little progress has been made in this area since then. The aspects of biological neurons I describe above are well known and easy to learn from Wikipedia or any introductory textbook on the subject, and yet these features are utterly and completely unrepresented in any artificial neural networks that I've ever seen or read about.
Sadly, I think this is a perfect example of how programmers tend to know nothing at all about any subjects other than computer programming. On the other hand, this might be a good thing: I don't want to live through a Terminator scenario.
Update: I forgot to mention the problem of aspects of artificial neural networks that aren't found in the real world at all. Back-propagation is the most conspicuous of these; there is absolutely nothing in real-world biological nervous systems that in any way, shape or form corresponds to this phenomenon.
Well, perhaps there are not many papers or resources directly talking about NN & compression together.
But the way I see it, if you're compressing some stream of data, there's at least a natural connection between being able to predict the next symbol and compression.
If your NN produces a vector of probabilities for what the next symbol is, you can then encode the actual symbol using arithmetic coding with those probabilities. Then the way to improve your compression ratios is simply to work on improving your NN's prediction accuracy.
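To make that connection concrete: with arithmetic coding, a symbol predicted with probability p costs about -log2(p) bits, so the total compressed size is just the sum of those per-symbol costs. Here's a minimal sketch of that accounting. A simple count-based predictor stands in for the NN (the function names are mine, not from any library); a real arithmetic coder would add only a fraction of a bit of overhead on top of this ideal figure:

```python
import math

def ideal_code_length(stream, predict):
    # Total bits an arithmetic coder would need (ignoring rounding)
    # when each symbol is coded using the predictor's probability
    # for it, given everything seen so far.
    bits = 0.0
    for i, symbol in enumerate(stream):
        p = predict(stream[:i])[symbol]
        bits += -math.log2(p)
    return bits

def count_predictor(alphabet):
    # Toy stand-in for the NN: returns a probability for each symbol
    # based on Laplace-smoothed counts of the history.
    def predict(history):
        counts = {s: 1 for s in alphabet}
        for s in history:
            counts[s] += 1
        total = sum(counts.values())
        return {s: c / total for s, c in counts.items()}
    return predict

# A skewed stream gets cheaper as the predictor learns:
# "aaaa" costs log2(5) ~= 2.32 bits here, versus 4 bits
# for a fixed uniform code over a two-symbol alphabet.
bits = ideal_code_length("aaaa", count_predictor("ab"))
```

Swap `count_predictor` for a network that outputs a probability vector and the same accounting applies: better prediction accuracy directly means fewer bits.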