From the abstract:
> This aligns with number theory conjectures suggesting that at higher orders of magnitude we should see diminishing noise in prime number distributions, with averages (density, AP equidistribution) coming to dominate, while local randomness regularises after scaling by log x. Taken together, these findings point toward an interesting possibility: that machine learning can serve as a new experimental instrument for number theory.
n*log(n) spacing with "local randomness" seems like such a common occurrence that perhaps it should be abstracted into its own term (or maybe it already is?). I believe the description lengths of the minimal programs computing BB(n) (via a Turing machine encoding) follow this pattern as well.
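At least for the primes themselves, the pattern is easy to poke at numerically. A throwaway Python sketch (stdlib only; the 10^7 sieve bound and the Exp(1) comparison are my own choices, the latter per Cramér's random model):

```python
# Rough check: prime gaps scaled by log p should average ~1 (prime
# number theorem), and Cramer's random model would make them look
# roughly Exp(1). The sieve bound is arbitrary.
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(10_000_000)
# Normalize each gap by log p; PNT says the mean tends to 1.
norm = [(q - p) / math.log(p) for p, q in zip(ps, ps[1:])]
mean = sum(norm) / len(norm)
var = sum((g - mean) ** 2 for g in norm) / len(norm)
print(f"mean ~= {mean:.3f}, variance ~= {var:.3f}")
# Pure Exp(1) would give mean 1 and variance 1; run it to see how
# close the primes actually get -- the "local randomness" is
# exponential-ish but not literally i.i.d.
```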
Huh. That's an interesting possible metric: how many competing tendencies are there in a space? It's a good question, and one that's been asked before.
I wonder how machine learnability compares to other measures of chaotic structure, such as multifractal approaches. I wouldn't be that surprised if it turned out to be accidentally the same as, or quite similar to, one of the existing metrics.
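A proper multifractal analysis is a bigger lift, but as a cheap stand-in from the same family of complexity measures, here's Pincus' approximate entropy run on normalized prime gaps versus i.i.d. Exp(1) noise. All parameter choices (m, r, sample size) are arbitrary, and this says nothing about learnability per se; it's just a sketch of how such a comparison could be set up:

```python
# Naive O(N^2) approximate entropy (Pincus 1991), compared on
# normalized prime gaps vs. i.i.d. Exp(1) noise.
import math, random

def approx_entropy(series, m=2, r=None):
    n = len(series)
    if r is None:  # common default: 0.2 * standard deviation
        mu = sum(series) / n
        r = 0.2 * (sum((x - mu) ** 2 for x in series) / n) ** 0.5
    def phi(mm):
        templates = [tuple(series[i:i + mm]) for i in range(n - mm + 1)]
        total = 0.0
        for t in templates:
            matches = sum(
                1 for u in templates
                if max(abs(a - b) for a, b in zip(t, u)) <= r
            )
            total += math.log(matches / len(templates))
        return total / len(templates)
    return phi(m) - phi(m + 1)

def first_primes(k):  # tiny trial-division generator, fine for small k
    ps, c = [], 2
    while len(ps) < k:
        if all(c % p for p in ps if p * p <= c):
            ps.append(c)
        c += 1
    return ps

ps = first_primes(501)
gaps = [(q - p) / math.log(p) for p, q in zip(ps, ps[1:])]
noise = [random.expovariate(1.0) for _ in gaps]
print("ApEn(prime gaps): ", round(approx_entropy(gaps), 3))
print("ApEn(Exp(1) noise):", round(approx_entropy(noise), 3))
```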
I think it's pretty obvious that the amount of order/entropy in a system corresponds to the amount of knowledge contained in, or extractable from, that system. This is a fundamental aspect of information theory.
One man's "pretty obvious" is another's "true by definition". It's not a "fundamental aspect" so much as it's just what those words mean.
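Concretely: Shannon's source coding theorem is that definition at work, since entropy is (to within a bit) the minimum expected description length. A throwaway Huffman sketch, with a toy dyadic distribution I picked so the two quantities coincide exactly:

```python
# Entropy vs. expected length of an optimal (Huffman) code.
import heapq, math, itertools

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
entropy = -sum(p * math.log2(p) for p in probs.values())

# Huffman: repeatedly merge the two least probable nodes.
counter = itertools.count()  # tie-breaker so heapq never compares dicts
heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + code for s, code in c1.items()}
    merged.update({s: "1" + code for s, code in c2.items()})
    heapq.heappush(heap, (p1 + p2, next(counter), merged))
codes = heap[0][2]

avg_len = sum(probs[s] * len(code) for s, code in codes.items())
print(f"H = {entropy:.3f} bits, Huffman avg length = {avg_len:.3f} bits")
# Source coding theorem: H <= avg_len < H + 1. With dyadic
# probabilities the two coincide exactly (both 1.75 bits here).
```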
Yes, that's a tighter way of saying it :)