I know the title makes this sound like a rather naive question: I have no reason to think there is some fundamental connection, and I've just been wondering about it idly for a while. Note that I don't work with block ciphers very often, so feel free to tell me if I'm off base entirely.

A standard way to build a block cipher is to iterate a round function, which is generally written as a combination of:

1. A linear layer (which provides "diffusion")
2. A nonlinear transformation (which provides "confusion")

There is usually an explicit third step that mixes in the subkey for the round in question; I will ignore this though, as it makes my analogy worse (I think). Either way, this round function is iterated enough times to get a comfortable margin of safety: when the number of rounds is reduced, there are often attacks.

Neural networks are a well-known technique for universal function approximation. They are generally built by repeating a construction that is a combination of:

1. A linear layer
2. A (fixed) nonlinear "activation function"

I don't understand this area well, but my impression is that this construction is repeated enough times to get a "good enough" approximation.

What I find so strange about these two paradigms is how "opposite" their goals are: block ciphers try to produce functions that are in no way "continuous", while neural networks are used to approximate arbitrary continuous functions. Either way, my question is whether there is a formal link between the above two design paradigms, or whether it is a coincidence (which seems perfectly plausible).
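To make the structural parallel concrete, here is a minimal toy sketch of both iterated constructions side by side. Everything here is invented for illustration (the 4-bit S-box, the rotation, the weights): it is not a secure cipher and not a trained network, just the "linear map, then fixed nonlinearity, repeated" shape that both designs share.

```python
# --- One round of a toy 8-bit substitution-permutation cipher ---
# (toy parameters chosen arbitrarily for illustration)
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]  # nonlinear ("confusion")

def permute_bits(x):
    # Linear layer ("diffusion"): rotate the 8-bit state left by 3.
    return ((x << 3) | (x >> 5)) & 0xFF

def cipher_round(state, subkey):
    state ^= subkey                       # subkey mixing (linear over GF(2))
    hi, lo = state >> 4, state & 0xF
    state = (SBOX[hi] << 4) | SBOX[lo]    # nonlinear substitution
    return permute_bits(state)            # linear permutation

# --- One layer of a tiny neural network ---
def relu(v):
    return [max(0.0, x) for x in v]       # fixed nonlinear activation

def nn_layer(x, W, b):
    # Linear map W @ x + b, then the activation.
    z = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
         for row, b_i in zip(W, b)]
    return relu(z)

# Both paradigms simply iterate their building block:
state = 0xA7
for k in [0x1B, 0x2C, 0x3D]:              # three rounds, three subkeys
    state = cipher_round(state, k)

x = [1.0, -2.0]
for W, b in [([[0.5, -1.0], [1.0, 0.5]], [0.1, -0.1])] * 3:  # three layers
    x = nn_layer(x, W, b)
```

The only structural difference visible at this level is which step carries the "learned/keyed" freedom: the cipher's nonlinearity (the S-box) is public and fixed while the key enters through the linear mixing, whereas the network's nonlinearity is fixed and all the trainable freedom sits in the linear maps.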