That sounds a lot like a two-state transistor to me... either off or on.
Transistors aren't just either on or off. They are analog devices whose output is proportional to the input. That is why they make excellent amplifiers. However a digital computer needs to have each node in only one of two states, 'on' and 'off'. To do this the circuit is arranged so the transistors will 'flip' from one state to another and stay there.
The fundamental circuit to store a single bit is called a 'flip-flop', where the output of one transistor is fed into the input of another one so that each one holds the other in the opposite state. However a practical circuit also needs a method of addressing and flipping the state, which generally requires 6 transistors per bit. This is called 'static' memory because the flip-flop stays in one state forever unless deliberately changed or the power is removed. These bits are often arranged in groups called 'registers' to represent larger numbers, eg. 8 bits (2^0 to 2^7) representing a number between 0 and 255.
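To see the cross-coupled idea in action, here's a toy simulation (my own sketch, not a real circuit model) of the simplest flip-flop, an SR latch built from two NOR gates feeding each other. Once set, it holds its bit even after the input is removed:

```python
# Toy SR (set-reset) latch: two cross-coupled NOR gates, each gate's
# output feeding the other's input, so the pair holds a single bit.
def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q, qn):
    # Iterate until the cross-coupled gates settle into a stable state.
    for _ in range(4):
        q, qn = nor(r, qn), nor(s, q)
    return q, qn

q, qn = sr_latch(1, 0, 0, 1)   # 'set' the latch
assert (q, qn) == (1, 0)
q, qn = sr_latch(0, 0, q, qn)  # remove the stimulus...
assert (q, qn) == (1, 0)       # ...and the bit is retained
q, qn = sr_latch(0, 1, q, qn)  # 'reset'
assert (q, qn) == (0, 1)
```

Note how with both inputs at 0 the latch just sits in whichever state it was left in - that's the 'static' storage.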
But using 6 transistors per bit is wasteful, so another type of storage element was invented called 'dynamic memory' that stores the bit as charge on the Gate of a FET (Field Effect Transistor). Problem is the charge gradually leaks away, causing the FET to 'lose its memory' after a few fractions of a second. To avoid this each cell in the memory is periodically read and the charge replenished - thus the operation is 'dynamic' rather than 'static'.
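The leak-and-refresh cycle can be sketched like this (the leak rate and refresh interval are made-up numbers, purely for illustration):

```python
# Toy model of a dynamic memory cell: charge leaks each tick, and a
# periodic refresh tops it up before it drops below the read threshold.
FULL, THRESHOLD = 1.0, 0.5
LEAK = 0.9                     # fraction of charge surviving each tick

def tick(charge):
    return charge * LEAK

def read_bit(charge):
    return 1 if charge > THRESHOLD else 0

charge = FULL                  # store a '1'
for t in range(20):
    charge = tick(charge)
assert read_bit(charge) == 0   # without refresh the bit is lost

charge = FULL
for t in range(20):
    charge = tick(charge)
    if t % 4 == 3:             # refresh: read the bit, rewrite it at full charge
        charge = FULL if read_bit(charge) else 0.0
assert read_bit(charge) == 1   # the refreshed cell keeps its '1'
```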
I would agree that neurons appear more complex than transistors, but in the end their action seems no different.
There is a similarity, but it's not 'no different'.
The circuitry in a digital computer is carefully designed to ensure that nothing ever changes state accidentally. A single bit error will make it screw up unpredictably. As this absolute reliability is difficult to achieve in dynamic memory, modern computers often have 'error correcting' memory that is able to detect and repair any single bit error provided it doesn't happen too often.
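A classic scheme behind that kind of error correction is the Hamming(7,4) code: 4 data bits plus 3 parity bits, arranged so the failing parity checks spell out the position of a flipped bit. A minimal sketch (real ECC memory uses wider words, but the principle is the same):

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits, then locate and
# repair any single flipped bit.
def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7

def correct(c):
    # Each check covers the positions whose index has that bit set; the
    # failing checks spell out the bad bit's position in binary.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # 0 means no error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # flip one bit 'accidentally'
assert correct(word) == [1, 0, 1, 1]
```

Two simultaneous errors defeat it, which is why the correction only works "provided it doesn't happen too often".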
Neurons don't work like that. They are inherently 'noisy' and prone to firing even without stimulus. This makes neurons more analog-like. A strong stimulus will make them fire more often. The brain then does a 'statistical analysis' of the information it's getting, as opposed to the bits in a computer whose states are all equally (and vitally) important.
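Here's a crude illustration of that statistical picture (the noise level and firing model are invented for the example, not biology): a 'neuron' that sometimes fires for no reason, yet whose average firing rate still reliably encodes the stimulus strength.

```python
import random
random.seed(42)

# Toy 'noisy neuron': it fires at random, but a stronger stimulus raises
# the firing probability. Averaging many trials - a crude 'statistical
# analysis' - recovers the stimulus even though any one firing is unreliable.
def fires(stimulus, noise=0.1):
    return random.random() < min(1.0, noise + stimulus)

def firing_rate(stimulus, trials=10000):
    return sum(fires(stimulus) for _ in range(trials)) / trials

weak, strong = firing_rate(0.2), firing_rate(0.6)
assert weak < strong            # the rate encodes stimulus strength
assert firing_rate(0.0) > 0     # fires occasionally even with no input
```

A single flipped bit wrecks a computer word, but a single spurious spike here just nudges an average.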
A digital computer made from neurons would be totally unreliable. A brain made from flip-flops expecting accurate data input would also be useless. A computer can simulate the statistical operation of the brain and nervous system, but this requires a huge number of computations.
Another problem is that the computer calculates one thing after another sequentially, whereas the brain is getting large amounts of information in parallel. In some cases - eg. the eye - a certain amount of 'processing' is done by the sensors themselves. The eye actually detects edges of shapes and color changes before sending impulses down the optic nerve. This method is also used in machine vision systems, where the image is divided into small areas which are preprocessed in parallel by GPUs with thousands of cores.
Trying to emulate the brain and nervous system with a conventional computer is a losing battle because you are attempting to make a machine that works sequentially with total accuracy act like a system that operates statistically on massive amounts of parallel data. The answer to this is to create hardware structures that are closer to the brain and nervous system.
As the size of transistors in ICs gets smaller to achieve higher density, it becomes harder to maintain absolute reliability. However for 'AI' we might not need that reliability. Modern memory cards use multiple charge levels on the FETs to increase the data density, then apply error correction to deal with the inevitable errors that occur. For a neural network that error correction may not be necessary because the system is designed to tolerate errors. It might not even require a CPU running program code to do its thing. With the right interconnections between 'neurons' a chip could perform the required processing itself, much like the eye processes images without doing any 'thinking'.
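The error tolerance is easy to demonstrate with a toy 'neuron' (the weights and inputs below are arbitrary, chosen only for illustration): perturb one weight slightly and the output shifts a little instead of being corrupted outright, unlike flipping a bit in a binary word.

```python
import math

# Toy 'neuron': a weighted sum squashed through tanh. A small 'hardware
# error' in one weight only nudges the output - compare that with flipping
# a single bit in an 8-bit byte, which can change its value by 128.
def neuron(inputs, weights):
    return math.tanh(sum(i * w for i, w in zip(inputs, weights)))

inputs = [0.5, -0.3, 0.8]
weights = [0.4, 0.9, -0.2]

clean = neuron(inputs, weights)
weights[1] += 0.05                  # a small analog error in one weight
noisy = neuron(inputs, weights)
assert abs(clean - noisy) < 0.05    # the output degrades gracefully

byte = 0b0100_0001                  # 65; flip the top bit...
assert byte ^ 0b1000_0000 == 193    # ...and the value jumps by 128
```

That graceful degradation is why a chip full of slightly unreliable 'neurons' can still compute something useful without any error correction at all.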