On this occasion we will describe the foundations of some areas that interest me: the roots of plants and the soles of our feet, the footings of shelters, and the bits and perceptrons of computing and artificial neural networks. In my opinion, all of them have something in common; stretching the limits of language can sometimes help us detect similar forms.
What do they share? Everything else loads onto them and builds upon them, and their shape largely determines what can stand above.
Roots: Plants obtain water and minerals through them, as well as support for the rest of their bodies. There are many types: some, like those of orchids, are capable of photosynthesis; others are sensitive enough to flee from light and bury themselves as deep as possible.
Feet: Feet support the living bodies of many beings. Their shape lets them carry those loads without being very extensive. Many of them are highly sophisticated balance-sensing tools; their inner nerve structure has high-speed highways to the brain. Do you know how much neural computation we dedicate to them? The shape of the cortical homunculus answers this question. Unlike other foundations, it is their softness that gives them such versatility.
Footings: The strip footing is one of the most widely used foundation techniques. It consists of stacked stone bound by some agglutinating material, and it normally sits just below most of the walls or columns that will support the structure of the shelter or building. Pad footings are isolated square footings that carry the load of a building at single points, when the structure needs to be raised on columns.
Bits: Most programming, and especially digital architecture, is built on the bit, which can be 0 or 1. With one bit you can distinguish 2 different things, and with several of them, all of their possible combinations. An 8-bit unsigned integer, for example, is one of the most common primitive data types and can represent any number from 0 to 2^8 - 1 = 255; from bits, or collections of them, it is possible to build anything representable on your computer.
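The counting above can be sketched in a few lines of Python (the variable names are just illustrative):

```python
# With n bits you can distinguish 2**n values; an unsigned 8-bit
# integer therefore covers 0 .. 2**8 - 1.
n = 8
print(2**n - 1)  # 255, the largest 8-bit unsigned value

# Enumerate all combinations of 8 bits as binary strings.
combos = [format(i, "08b") for i in range(2**n)]
print(len(combos))             # 256 distinct patterns
print(combos[0], combos[-1])   # 00000000 11111111
```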
Perceptron: It is one of the simplest ways to propagate information: f(W·X + b), where f() is an activation function and W·X is the dot product of the weight vector W with the signal vector X arriving at the neuron. The perceptron is activated according to the signals that reach it, each multiplied by the weight of its incoming edge; based on this sum it emits a signal, the output. How much signal it emits depends on the activation function on the output edge; normally these activation functions boost or cut their argument. b is an extra value that biases the activation.
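The formula f(W·X + b) can be sketched directly in Python. This is a minimal illustration, not a library implementation; the AND-gate weights below are a hypothetical choice to show a step-activated perceptron firing or not:

```python
import math

def perceptron(x, w, b, f=lambda z: 1.0 / (1.0 + math.exp(-z))):
    """f(W·X + b): weighted sum of the inputs, shifted by bias b, squashed by f."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return f(z)

# A step activation recovers the classic perceptron: it fires (1) or not (0).
step = lambda z: 1 if z >= 0 else 0

# Hypothetical weights w = [1, 1], b = -1.5 implement logical AND on 0/1 inputs:
and_gate = lambda x1, x2: perceptron([x1, x2], [1, 1], -1.5, step)
print(and_gate(1, 1), and_gate(1, 0), and_gate(0, 0))  # 1 0 0
```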
There is a theorem which says that with enough perceptrons organized in a network you can approximate any continuous function, and hence simulate almost any pattern: the universal approximation theorem. Of course, the network must be wide enough and/or deep enough, depending on the case.
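A small sketch of the idea: a one-hidden-layer ReLU network computes a piecewise-linear function, so adding hidden units lets it track a smooth curve arbitrarily closely. Here the target x² and the knot placement are hypothetical choices for illustration; the weights are built by hand rather than trained:

```python
import numpy as np

# Interpolate x**2 at a few knots with one hidden ReLU unit per knot.
knots = np.linspace(0.0, 1.0, 9)
y = knots ** 2
slopes = np.diff(y) / np.diff(knots)                 # slope of each linear segment
c = np.concatenate([[slopes[0]], np.diff(slopes)])   # slope change at each knot

def net(x):
    """y[0] + sum_j c[j] * relu(x - knots[j]): a one-hidden-layer ReLU network."""
    hidden = np.maximum(np.subtract.outer(x, knots[:-1]), 0.0)  # hidden ReLU layer
    return y[0] + hidden @ c

xs = np.linspace(0.0, 1.0, 201)
print(np.max(np.abs(net(xs) - xs ** 2)))  # small: the network tracks x**2 closely
```

More knots (a wider hidden layer) shrink the error, which is the width direction of the theorem in miniature.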
Here ends the foundations column of this information-architecture series. Thanks to the architects and civil engineers for their concepts and teachings.