
Merged Artificial Intelligence Research: Supermathematics and Physics

Read my post again. That was an irrelevant paper because there were no manifolds from math or physics in it, just a "data manifold".

I checked the next paper and you are wasting everyone's time - no mention of manifolds, and an unsupported assertion:
12 October 2017: Cite the definition of disentangling factors that states that it always has math or physics manifolds in it.

The things I struck through above stemmed from your ignorant highlighted statement.

Note that I didn't say that if you were disentangling factors, you were only in the world of manifolds.

Most importantly, in those papers it is clear that manifolds are being discussed; some key giveaways are mentions of "manifolds", and "disentangling factors" or "disentangling" or "disentangled".

A thing being central to field A does not automatically make it central to another field. Physicists may work with the statistics of bosons and fermions so:
12 October 2017: Cite the use of Fermi or Bose statistics "to study consciousness/artificial general intelligence" :p!

I don't detect Fermi statistics usage in AGI...

[IMGw=180]https://i.imgur.com/rp1IMhq.jpg[/IMGw]

But one shouldn't be quick to call things immutably separate; there were probably people who just didn't detect the correlation between things such as mean field theory and machine learning. (Mean field theory in machine learning paper here)
 
Max Tegmark probably has a better grasp of this topic than you, and as he expressed in a YouTube video here, physicists have long neglected to define the observer in many of their equations. (The observer being the intelligent agent.)

No offense, but why should they? Their purpose, by their choice, is to study real physics, not the brainwaves of those who do the actual work of physicists.

On the bright side, you may want to get with Kumar on this and related topics.
 
[IMGw=260]https://i.imgur.com/MrxleHs.jpg[/IMGw]

That paper was merely one sample from the much larger body of work being done regarding manifolds.

Here is a tip: when you see "disentangling factors" in relation to machine learning, you are in the world of manifold learning (a rough sketch follows the paper list below).


Early Visual Concept Learning with Unsupervised Deep Learning, (June 2016)

Exponential expressivity in deep neural networks through transient chaos, (June 2016)

Disentangling factors of variation in deep representations using adversarial training, (November 2016)

...
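For anyone unfamiliar with the jargon, here is a rough illustration of what "manifold learning" means in practice: recovering a low-dimensional set of factors from data that lies on a curved surface embedded in a higher-dimensional space. This is only a minimal sketch, assuming scikit-learn is available; the synthetic swiss-roll dataset and the Isomap parameters are my own choices for illustration, not something taken from the papers listed above.

[code]
# Minimal manifold-learning sketch (assumes numpy and scikit-learn are installed).
# A "swiss roll" is a 2-D sheet rolled up in 3-D; Isomap tries to unroll it,
# i.e. to recover the underlying low-dimensional factors of variation.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 1500 points sampled from a rolled-up 2-D sheet embedded in 3-D
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Embed back down to 2 dimensions using geodesic (along-the-surface) distances
embedding = Isomap(n_neighbors=10, n_components=2)
X_2d = embedding.fit_transform(X)

print(X.shape, X_2d.shape)  # (1500, 3) -> (1500, 2)
[/code]

The point of the sketch is purely terminological: the "factors" that the papers above talk about disentangling play the role of coordinates on such a data manifold, whether or not that manifold has anything to do with the manifolds of physics.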


Footnote:

[qimg]https://i.imgur.com/58zzIPo.png[/qimg]

It is not "irrelevant" that manifolds are central to physics and mathematics.
That physicists may work with these things is quite relevant if they are to utilize these properties to study consciousness/artificial general intelligence.

Manifolds may afford models degrees of freedom (learning position, scale, size, etc.) in temporal difference scenarios.

Not really.....hope that helps - and leads you away from distractions!!!
 
@Realitycheck
[IMGw=180]https://i.imgur.com/0SKFCTP.png[/IMGw]

Although Max Tegmark could be wrong on this, I may not research the matter of consciousness' requirements in physics directly, but rather physics' requirements in developing artificial consciousness; i.e. I shall focus on striving to contribute to the development of artificial general intelligence, which is probably a type of meta-solution to many of humanity's issues, including illness diagnosis, medicine development, and the advancement of the rate of physics experimentation ...
 
...However, common sense tells us that, if you think about it logically, consciousness (that is, the mechanism which allows the observer to achieve complex goals in a general learning manner, i.e. general intelligence) is reasonably nothing but an information-driven process, bounded by the laws of physics, and therefore, not surprisingly, applicable in physics...

Applicable in physics, sure. A subject for study by physicists? Well that isn't so clear-cut. Science divides itself up into smaller and smaller specialities, and so there are experts in consciousness and others in artificial intelligence whose work may be of interest to physicists.......without it being necessary or even productive for non-specialists (including physicists) to do the work themselves.
 
...snipped inane image...
Although Max Tegmark could be wrong on this, I may not research the matter of consciousness' requirements in physics directly, but rather physics' requirements in developing artificial consciousness...
Max Tegmark could be right or wrong because this is his opinion, i.e. not textbook or consensus physics. But if you do not want to learn what he said, then why cite it?
"Why does deep and cheap learning work so well?" is not about any physics requirement.
We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various "no-flattening theorems" showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss, for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.
This is about efficiency.
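The 2^n claim at the end of that abstract is easy to sanity-check numerically. The following is only my own sketch, not code from the paper or from anyone in this thread: it approximates a product of two numbers with four smooth "neurons" (softplus here), which is the kind of cheap, compositional building block the abstract is describing. Chaining such pairwise products in depth multiplies n variables with on the order of n neurons, whereas the quoted no-flattening result says a single hidden layer needs on the order of 2^n.

[code]
# Sketch: approximating x*y with four softplus "neurons" (assumes numpy).
import numpy as np

def softplus(u):
    # smooth nonlinearity with a nonzero second derivative at 0
    return np.log1p(np.exp(u))

SOFTPLUS_PP0 = 0.25  # second derivative of softplus at 0

def neural_product(x, y, scale=100.0):
    """Approximate x*y with a 4-neuron 'multiplication gate'.

    Inputs are shrunk by `scale` so the quadratic Taylor term dominates,
    and the output is re-inflated by scale**2.
    """
    a, b = x / scale, y / scale
    combo = (softplus(a + b) + softplus(-a - b)
             - softplus(a - b) - softplus(-a + b))
    return scale**2 * combo / (4.0 * SOFTPLUS_PP0)

x, y = 3.7, -2.2
print(neural_product(x, y), x * y)  # the two numbers should nearly agree
[/code]

The scale factor is the design knob here: the larger it is, the smaller the Taylor-expansion error in the approximation, at the cost of numerical precision.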
 
Max Tegmark could be right or wrong because this is his opinion, i.e. not textbook or consensus physics. But if you do not want to learn what he said, then why cite it?
"Why does deep and cheap learning work so well?" is not about any physics requirement.

This is about efficiency.

And what makes you think efficiency and physics are separate?

Anyway, consider these quotes from the paper:

"We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and
polynomial log-probability translate into exceptionally simple neural networks."

"We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one.

We formalize these claims using information theory and discuss the relation to the renormalization
group" .

So, it is about physics. Did you actually read more than O(log(n)) of the paper?
 
Applicable in physics, sure. A subject for study by physicists? Well that isn't so clear-cut. Science divides itself up into smaller and smaller specialities, and so there are experts in consciousness and others in artificial intelligence whose work may be of interest to physicists.......without it being necessary or even productive for non-specialists (including physicists) to do the work themselves.

Refer to my following quote:

@Realitycheck
[IMGw=180]https://i.imgur.com/0SKFCTP.png[/IMGw]

Although Max Tegmark could be wrong on this, I may not research the matter of consciousness' requirements in physics directly, but rather physics' requirements in developing artificial consciousness; i.e. I shall focus on striving to contribute to the development of artificial general intelligence, which is probably a type of meta-solution to many of humanity's issues, including illness diagnosis, medicine development, and the advancement of the rate of physics experimentation ...
 
12 October 2017: Cite the use of Fermi or Bose statistics "to study consciousness/artificial general intelligence" :p!
Manifolds being used in physics does not automatically mean that they can be used in consciousness/artificial general intelligence. That is why there is the manifold hypothesis (not theory or mechanism, etc.).

More seriously:
12 October 2017: Cite the definition of disentangling factors that states that it always has math or physics manifolds in it.

You need to cite where I supposedly used the word "always" as you claimed above.

Also, refer to this.
 
Three threads about similar topics in artificial intelligence research have been merged. Any discussion of these or related topics should be confined to this thread. Starting new threads may incur further mod action. If you are unsure whether a post should go here or in a new thread, please PM the moderating team. Thank you.
Posted By: Loss Leader
 
