ProgrammingGodJordan
Banned
Part A - Artificial Intelligence and humankind, in 2 sentences.
Artificial Intelligence is unavoidably overtaking humans in cognitive tasks, and some projections place human-level brain power in artificial machines/software at around 2020 (see Wikipedia's article on exascale computing).
Artificial Intelligence is already solving many of humankind's problems.
Part B - The crucial difference between Edward Witten and Max Tegmark
Edward Witten is quite the human being/physicist.
Max Tegmark is likewise quite the human being/cosmologist.
Both hold PhDs in physics.
The urgent difference?
(1) Max presents consciousness as a mathematical problem... Although Max Tegmark is not an artificial intelligence pioneer, nor officially trained as an artificial intelligence researcher, Max is already contributing important work, helping to organize the theory of deep learning (currently a hot paradigm in Artificial Intelligence).
A sample of Max's AI work: https://arxiv.org/abs/1608.08225
Max describing consciousness as a mathematical problem: https://www.youtube.com/watch?v=GzCvlFRISIM
(2) Edward Witten believes we will never truly understand consciousness...
https://www.youtube.com/watch?v=hUW7n_h7MvQ
https://futurism.com/human-level-ai-are-probably-a-lot-closer-than-you-think/
Part C - How components of Edward's genius apply in AI today
Edward Witten's work concerns some deep mathematics of manifolds. (Sample: https://arxiv.org/abs/hep-th/9411102)
In artificial intelligence, models are observed to be learning some form of manifold representation, especially in the Euclidean regime, and such representations have already been demonstrated to be strong candidates for the 'disentangling problems' that arise in many problem spaces.
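To make the 'disentangling' point concrete, here is a minimal sketch (my own illustrative setup, assuming numpy and scikit-learn are installed; the dataset and layer sizes are arbitrary choices, not drawn from Witten's or Tegmark's work). A linear classifier fails on raw concentric-circle data, but the hidden representation learned by a small tanh network makes the same classes roughly linearly separable:

import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Two classes that are entangled (concentric circles) in the raw 2-D space.
X, y = make_circles(n_samples=1000, noise=0.05, factor=0.4, random_state=0)

# A purely linear model cannot separate them in the input space.
print("linear, raw inputs:", LogisticRegression().fit(X, y).score(X, y))  # roughly 0.5

# A small tanh network learns a new representation of the same points.
net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    max_iter=5000, random_state=0).fit(X, y)

# Recompute the hidden-layer activations h = tanh(X W + b) by hand.
h = np.tanh(X @ net.coefs_[0] + net.intercepts_[0])

# The same linear model, applied to the learned representation, now separates
# the classes almost perfectly: they have been 'de-tangled'.
print("linear, hidden features:", LogisticRegression().fit(h, y).score(h, y))  # typically near 1.0

In other words, the learned hidden layer acts as the kind of manifold-untangling map described above.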
As an unofficial AI researcher myself, I am working on AI as it relates to super-manifolds. (I recently devised something called 'thought curvature', involving another construct of mine called the 'supermanifold hypothesis in deep learning', built atop Yoshua Bengio's manifold work.)
So here is a brief, concise description of how manifolds non-trivially relate to artificial intelligence (see also the Deep Learning book co-authored by Bengio, or Chris Olah's manifold explanation):
Points preserve homeomorphism: for any point p under a transition T given by a transformation/translation t (a pertinently continuous function with a continuous inverse), p0 (p before T) maps bijectively onto p1 (p after T) under t.
Following the above, topologies preserve homeomorphism: for any collection of points W (e.g. a matrix of weights), under a transition T given by a sequence s of such continuous, invertible transformations/translations, W0 (W before T) maps bijectively onto W1 (W after T) under s, provided that every representation of W has a non-zero determinant.
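A minimal numpy sketch of the determinant condition just described (the matrices below are made up purely for illustration): a non-zero determinant means the map is invertible, hence bijective, so the original points are recoverable; a zero determinant means no inverse exists at all.

import numpy as np

points = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]])   # a few 2-D points

# Non-singular W: det != 0, so the linear map is invertible (bijective),
# and applying the inverse recovers the original points exactly.
W_good = np.array([[2.0, 1.0], [0.0, 1.0]])
print(np.linalg.det(W_good))                                 # 2.0, non-zero
print(np.allclose((points @ W_good) @ np.linalg.inv(W_good), points))  # True

# Singular W: det == 0, so distinct points can collapse onto the same image
# and no inverse exists; the bijection (and the homeomorphism) is lost.
W_bad = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.det(W_bad))                                  # 0.0 (or -0.0)
try:
    np.linalg.inv(W_bad)
except np.linalg.LinAlgError:
    print("W_bad is not invertible")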
Now, these topological homeomorphisms are maintained all the way to linear separation/de-tangling if and only if the neural network's dimension is sufficient (a minimum of 3 hidden units for 2-dimensional W); a small sketch of this appears after the notes below.
Otherwise, after maintaining homeomorphism up to some point while having insufficient dimension, or insufficient neuron firing per data unit, in non-ambient-isotopic topologies that satisfy NOTE (ii), W eventually yields a zero determinant, which prevents linear separation/de-tangling. At zero determinant, unique solutions for the scalar multiplications dissolve, because the matrix is no longer continuous and invertible.
NOTE (i): 'Entangled' refers to the state before de-tangleable classes have been de-tangled/made linearly separable.
NOTE (ii): Unique solutions in matrices are outcomes that resemble the data sets; they arise for homeomorphisms (topologies where continuous, invertible transformations/translations with non-zero determinant obtain) OR for ambient isotopies (where positive/non-singular determinants, neuron permutations, and a minimum of 1 hidden unit occur; e.g. for a 1-dimensional manifold, 4 dimensions are required).
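And here is the sketch of the dimension claim promised above (again an illustrative setup of my own choosing, assuming scikit-learn is installed): on 2-dimensional concentric-circle data, a tanh network with only 2 hidden units should not be able to reach full separation, while 3 hidden units typically can.

from sklearn.datasets import make_circles
from sklearn.neural_network import MLPClassifier

X, y = make_circles(n_samples=1000, noise=0.03, factor=0.4, random_state=0)

for n_hidden in (2, 3):
    # A few restarts, since such tiny networks easily get stuck in bad optima.
    best = max(
        MLPClassifier(hidden_layer_sizes=(n_hidden,), activation="tanh",
                      solver="lbfgs", max_iter=10000,
                      random_state=seed).fit(X, y).score(X, y)
        for seed in range(5)
    )
    print(n_hidden, "hidden units -> best training accuracy:", best)

# Typical outcome: 2 hidden units plateau clearly below 1.0, while 3 hidden
# units reach, or come very close to, 1.0, consistent with the claim that
# 2-dimensional data needs at least 3 hidden units to be de-tangled.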
https://www.quora.com/What-is-the-Manifold-Hypothesis-in-Deep-Learning/answer/Jordan-Bennett-9
Some months ago, I personally contacted Witten, suggesting that his genius could apply in AI. (No response, though.)
Why does Edward Witten allow his belief (as shown in the video above) to keep him from potentially making considerable contributions to artificial intelligence, one of humankind's most profound tools, despite the contrasting evidence that manifolds apply in machine learning?
///
I have edited this post. The original made excessive use of large fonts and of white space and rose to the level where its formatting was disruptive to the forum. Don't do that. The approach didn't work for Time Cube, and it is not acceptable here.
By the way, the mix of bold, highlight, and red is problematic, too. Let's be more conservative in our typography, please.
Replying to this modbox in thread will be off topic.
Posted By: jsfisher
Last edited by a moderator: