(Google translation with corrections)
Information can only be linked where it is present at the same time. Digital gates in a PC or smartphone, for example, require that the signals to be linked are statically present at the inputs of a gate for a certain period of time. Computers therefore have a clock and a clock frequency. The moment of a clock transition from high to low, or vice versa, defines when the signals are taken over into a module. But the clock signal has to be present everywhere on the huge integrated circuit (IC) at the same time, to within nanoseconds.
While wires and circuits in computers transmit signals at about a tenth the speed of light, nerve networks are around a million times slower. Signals travel so slowly that a clock cannot be used for synchronization. And as if that weren't trouble enough, nervous signals are pulse-like. We speak of "spikes". And spikes crawl extremely slowly through our brain. So how are nerve networks synchronized?
If we try to bring two spikes coming from different directions together at an AND gate, they will rarely arrive at the same time: the gate output will remain silent. Digital circuits mostly don't work with spikes, especially not when the wires are as slow as our nerves. Information can only be processed where it is present at the same time. Nerve networks do not seem to work like our computers. But how do they work?
To find out how nerve networks might work, we imagine that each nerve branches out in many directions. If we now follow spikes wandering through different nerves, we are sure to find a place where they meet. Only a nerve cell located at this place can be excited!
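The meeting place can be calculated directly. As a minimal sketch (hypothetical numbers and function names, not taken from the manuscript), consider two spikes entering a fiber from opposite ends: shifting one start time shifts the place of coincidence.

```python
# Two spikes enter a fiber of length L (mm) from opposite ends with
# conduction speeds v1, v2 (mm/ms) and start times t1, t2 (ms).
# They coincide at the point x where both travel times are equal:
#     x/v1 + t1 = (L - x)/v2 + t2
# Only a neuron located at x receives both spikes simultaneously.

def meeting_point(L, v1, v2, t1=0.0, t2=0.0):
    """Distance x from the left end at which the two spikes meet."""
    return (L / v2 + t2 - t1) / (1.0 / v1 + 1.0 / v2)

print(meeting_point(L=100.0, v1=1.0, v2=1.0))           # 50.0 (the middle)
print(meeting_point(L=100.0, v1=1.0, v2=1.0, t1=20.0))  # 40.0 (shifted left)
```

Delaying the left spike by 20 ms moves the excitable place 10 mm toward the left end: the temporal pattern selects the location.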
In contrast to the computer, in which logic levels (0 or 1) determine the information content, in the nerve network "the information" is determined by the location where different spikes meet simultaneously. Nerve networks cannot be understood with computer informatics. They are of a different nature. We are dealing with a completely different computer science.
If you dwell on this thought long enough, you will discover the principle behind it: nerve networks can only work projectively (depicting, image-like), never statically like a computer. That explains why it takes us years to learn the multiplication table, or why memory artists have to invent a picture story in order to memorize a few playing cards. We arrive at the "interference networks". The information content is like a photo in the sharpness of a projection. Waves become images and vice versa; optics and acoustics merge under the roof of the interference networks: "Seeing is Hearing" was written on the first acoustic camera in 1996.
It borders on a miracle that Karl Lashley, who became famous for his rat experiments ("In Search of the Engram"), postulated interference in nerve networks as early as 1942. Unfortunately, I discovered the following quote in Karl Pribram's estate only after his death in January 2015. Karl had sent me various articles over the years.
Lashley (1942) had proposed that interference patterns among wave fronts in brain electrical activity could serve as the substrate of perception and memory as well. This suited my earlier intuitions, but Lashley and I had discussed this alternative repeatedly, without coming up with any idea what wave fronts would look like in the brain. Nor could we figure out how, if they were there, how they could account for anything at the behavioral level. These discussions taking place between 1946 and 1948 became somewhat uncomfortable in regard to Don Hebb's book (1948) that he was writing at the time we were all together in the Yerkes Laboratory for Primate Biology in Florida. Lashley didn't like Hebb's formulation but could not express his reasons for this opinion: "Hebb is correct in all his details but he's just oh so wrong" . (Karl Pribram in 'Brain and Mathematics', 1991)
Hebb's postulate reads: "When one neuron repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them, if they already exist) in contact with the soma of the second cell".
This approach quickly became the universal, but unsuitable, basis of neuro-computing (NN, ANN) for modeling neural networks worldwide.
But why did Lashley say "Hebb is correct in all his details but he's just oh so wrong"? Because learning can only take place where the delay structure of the network ensures that the pulses to be processed arrive at exactly the same time. If a neuron does not receive the partial pulses of a source simultaneously, because of an unsuitable delay structure, it will neither learn nor do anything. Only a neuron whose delay structure enables code detection (chap. 10, p. 210) can learn or do anything. "Delays dominate over weights", I wrote several times.
With the thumb experiment in 1992, the author noticed rather accidentally that pulses, in connection with the very low conduction speeds of nerves, produce an unknown type of communication and information processing. The delay of needle-sharp pulses means that information can only be processed where pulses meet: temporal patterns thus become spatial codes. Pulse waves propagate through various nerve fibers. Wherever a pulse wave interferes with itself or with another, or wherever different wave fronts meet, its goal has been reached.
To get further in terms of ideas, a wave theory in the time domain had to be developed, as well as a wave theory on discrete and inhomogeneous spaces. This is where the term 'Wave Interference Networks' came from. Ideas on the way to this unknown computer science are outlined in the manuscript.
In 1992, research on "Neural Networks" (NN, now known as Artificial Neural Nets, ANN) had reached rock bottom. Faith and funding slowly dried up. Technicians increasingly rejected ANN because their learning behavior was not verifiable, whereas the biologists' approach was too mystical, see the quote above. In part, catastrophic learning results brought the end. What remained was ANN and, as a mathematical IT discipline, so-called connectionism. The NN lacked something for the interpretation of nerve networks: uniformly clocked "neural networks" violate the spacetime structure of the neural network to be modeled, which leads to catastrophic modeling errors right from the start.
Digital filters use the temporal dimension, networks the spatial one. In the nerve network, however, both dimensions are linked: the greater the length of an axon or dendrite, and the greater the distance between two neurons, the greater the delay times. Extremely slow speeds together with pulse-like signals ensure that, unlike ANN, nerve networks have to get by without a clock. How can such systems work?
Nerve networks consistently represent unsynchronized race circuits. Wherever a pulse meets its brothers, the goal is reached. That means: the delay-time structure of the network alone defines sender and recipient! Bits cannot be added on such networks; only images or (pictorial) characters are transferable. Interference networks are bound to projections (of the optical type), and these are basically mirror-inverted.
They can also be used to build n-dimensional digital filters; think of three-dimensional FIR or IIR filters. To recognize their properties better, the term "interference", known from optics, seemed useful as the lowest common denominator: information is processed where wave peaks arrive in high, relative simultaneity. As a result, the author called these networks "interference networks" (IN) from 1996 onwards.
Already in 1992, the realization had matured that pulse figures of the nervous type, if they map at all, can only map mirror-inverted (see cover page). The thumb experiment was used to demonstrate the relativity of pulse propagation as an addressing principle in neural spaces, see Chapter 6 on the thumb experiment of December 16, 1992. Unknown aspects of a neural computer science far away from neural networks or Boolean algebra became apparent.
In practice, mirror-inverted images were known from optics and from nerve experiments (Penfield's homunculus, Jeffress' sound location), but they could not be found in the literature of neurocomputing, which at that time already comprised hundreds of thousands of articles and thousands of books. In terms of system theory, something was wrong with the so-called neural networks. Research began.
With the discovery of mirror-inverted pulse images, it became necessary to explore the physically realizable possibilities of these "delay networks" and their peculiarities (zooming, movement, interference overflow, connection and decomposition, overdetermination, n-dimensionality, space-time coding, neighborhood inhibition, bursts, etc.).
These investigations were extremely successful and quickly led to the manuscript. For example, "seeing" and "hearing" merge through investigations into self-interference (vision maps) and external interference (hearing maps). This knowledge formed the basis for the development of the first acoustic images and films between 1995 and 1996, and for acoustic imaging par excellence (Acoustic Camera).
Originally intended only as a memory aid, the manuscript had to sketch, in the shortest possible time, the approximate direction of a paradigm shift from a mathematical to a physical, wave-theoretical view of nerve networks (pulse waves on ionic channels).
Interference networks (IN) can be discovered in a variety of tasks, from optics to digital race circuits, radar, sonar, GPS, beamforming, neural networks and signal processing. From this point of view, digital circuits, state machines, digital filters, and pattern or weight networks (ANN) represent IN subgroups with discrete timing. Nerve networks serve only as a synonym for sketching a vision of a more abstract system theory, that of interference networks. The diversity of the fields of knowledge concerned literally pushed for a theoretical basis of a more abstract nature. Like digital filters, Boolean algebras are only a sub-area of interference networks.
There are some terms in the book that must be considered inappropriate today. For example, Teuvo Kohonen** questioned the use of the term "convolution" in 1995 (e.g. "Faltung", KA06.pdf, page 147). Here we come across a peculiarity of interference systems, which may be the reason why access is so full of hurdles.
While the multiplicative, one-dimensional interference of two impulses on an electrical wire has the mathematical convolution as its analogy (here we can fold the time axis), in two- or higher-dimensional space we avoid the term convolution entirely, since no folding of the time axis can be carried out there. For this purpose, the "mask algorithm" was introduced, which also includes the one-dimensional convolution integral as an interference integral. Moreover, when modeling the sciatic-nerve experiment with wave deletion (Chapter 6, p. 144), convolution and interference integrals fail as well.
If we take the time functions between the generating space and the detecting space as so-called channel data, the question of the computability of the images in both spaces arises. If we want to calculate the generator space, we speak of (non-mirrored) reconstruction; if the detector space is to be calculated, of (mirror-inverted) projection. Both differ only in the direction of the time axis, that is, in the sign of the delays. PSI-Tools and NoiseImage only calculate the reconstruction integral; to calculate the projection, the time axis could be inverted with PSI-Tools (a function removed in NoiseImage).
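As a hedged sketch of this distinction (assuming discrete sampling and the standard delay-and-sum idea, not the PSI-Tools implementation): reconstruction compensates each channel delay T by shifting the channel back, and flipping the sign of the shifts would yield the mirror-inverted projection direction.

```python
import numpy as np

def delay_and_sum(channels, delays, sign):
    """Shift each channel by sign*T samples and sum. sign=-1 compensates
    the delays (reconstruction of type f(t+T)); sign=+1 would run the
    time axis the other way (projection of type f(t-T))."""
    out = np.zeros(len(channels[0]))
    for ch, T in zip(channels, delays):
        out += np.roll(ch, sign * int(T))
    return out

# One emitted pulse, recorded by two channels with delays of 5 and 9 samples.
n = 32
ch1, ch2 = np.zeros(n), np.zeros(n)
ch1[5], ch2[9] = 1.0, 1.0
rec = delay_and_sum([ch1, ch2], delays=[5, 9], sign=-1)
print(int(rec.argmax()), float(rec.max()))  # 0 2.0 -> both pulses interfere at the origin
```

Only where the shifted channel data coincide does the sum rise above the single-channel level; everywhere else the "image" stays flat.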
The approach for projection and reconstruction (as a basis, for example, for acoustic photography and cinematography) was laid with this book, see the mask algorithm, Chapter 14, p. 284. For more information, see reversed interference projections of type f(t-T), or right-sided interference reconstructions of type f(t+T).
Since the appearance of the first acoustic images, publications of an algorithmic nature would have conflicted with the commercial usability of the results, so they remained sparse.
In brief, a key statement of the manuscript reads as follows: nerve networks can only be adequately simulated with a three-dimensional, electrical network simulation. Each network node requires spatial coordinates; each branch needs a delay. All delays that can be read from the three-dimensional structure of the nerve network must be mapped very precisely: these essentially form the function ("form codes behavior"). In addition, static (excitatory or inhibitory) synapses and threshold parameters must of course be observed. Wave deletion on bidirectional branches also needs to be modelled; how, has not yet been clarified.
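A minimal sketch of these requirements (hypothetical class and function names, an assumed conduction speed; wave deletion is deliberately left out, as the text notes it is unsolved):

```python
import math

SPEED = 1.0  # assumed conduction speed in mm/ms (a made-up value)

class Node:
    """A network node with spatial coordinates and a firing threshold."""
    def __init__(self, x, y, z, threshold=1.5):
        self.pos = (x, y, z)
        self.threshold = threshold

def branch_delay(a, b):
    """Branch delay read off the 3-D geometry: distance / speed."""
    return math.dist(a.pos, b.pos) / SPEED

# Two sources and one detector: equal geometric delays mean that pulses
# emitted simultaneously at s1 and s2 coincide at det.
s1, s2, det = Node(0, 0, 0), Node(10, 0, 0), Node(5, 4, 0)
print(round(branch_delay(s1, det), 3), round(branch_delay(s2, det), 3))  # 6.403 6.403
```

Moving `det` by even a fraction of a millimeter desynchronizes the arrivals, which is why the delays "must be mapped very precisely".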
The first application of the book for acoustic imaging showed success just two years later (1995): the world's first acoustic images and films were created with the software "PSI-Tools" (Parallel and Serial Interference Tools, Sabine Höfs & Gerd Heinz).
The book manuscript, including all formulas and pictures, was written with Lotus AmiPro 3.1 under Windows 3.1 (1988-1994). Unfortunately, paragraph formatting only worked correctly up to Windows 98, so corrections could only be incorporated up to the turn of the millennium. The date on the cover was probably accidentally set to 'File Creation Date'. The manuscript was created with my own resources. It had to be ended in May 1993, as a position with the employer GFaI e.V. began on June 1, 1993. Smaller additions and corrections followed until the beginning of 1994 (e.g. the section on the barn owl was added to Chapter 1, "Jeffress delay model 1948"). In 1993, Mark Konishi published the ideas of his teacher Jeffress on sound location in parallel with the NI manuscript***.
The original Chapter 10 (Interference Logic) failed in its approach: the mathematical modeling attempted there turned out to be too narrow. After simulative verifications with Peter Puschmann and Gunnar Schoel (FHTW 1994), the chapter was later exchanged for the chapter "Elementary functions of the neuron"; see also the original table of contents from 1993 (the index also contains old references).
The book was actually written as a working manuscript: connections and ideas should not be noted only on a sketch pad. Written in one hundred days (January 1993 to May 1993), including all pictures and formulas, the details are partially immature, the formulations are still uncertain, and every now and then it is euphoric without the reader always being able to follow. It clearly shows the turmoil in which a new field of knowledge unfolds. In short: one misses the polish of mature works. Nevertheless, it still seems worth reading today. Many general findings are still brand new.
To speak with Thomas S. Kuhn*: in retrospect, the manuscript shows the obstacles on the way, but not the brilliance of the abstractions. It is more suitable for historians of science than for students. Nevertheless, it is the book to which we owe acoustic imaging, and to which we could gradually owe a consolidation of neuroscience, if only these basics were taught.
The problem: biologists and physicians have only rudimentary training in physics, computer science and mathematics; physicists do not know neuro-anatomy. This is extremely sad, because an understanding of the nervous system is only possible when the neuroscientist has mastered all these fundamentals together. New research remains fragmentary as long as interference networks are not understood. Funders should gradually become aware of this. You cannot build a chair without the knowledge and tools of the carpenter's trade.
Since no real book has yet been written (it would be too early for that), but I keep receiving requests for explanatory material on interference networks, this working manuscript will remain on the web as long as nothing better is available. Sometimes the journey is the goal.
* Kuhn, Thomas Samuel: The Structure of Scientific Revolutions. Uni Chicago Press, 1962