
Monday, February 05, 2018

Simulation Hypotheses

The simulation hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, most likely a computer simulation. Some versions rely on the development of a simulated reality, a proposed technology that would seem realistic enough to convince its inhabitants the simulation was real. The hypothesis has been a central plot device of many science fiction stories and films.

Sunday, February 04, 2018

Wednesday, June 15, 2016

Further on Base Reality

Just trying to see the context of possibilities under a paradigmatic viewing of a simulation of Space and Time.

3. Wheeler. His phrase “It from Bit” implies that at a deep level, everything is information.

“Physicists have now returned to the idea that the three-dimensional world that surrounds us could be a three-dimensional slice of a higher dimensional world.” (L. Randall, 2005) p52

If you probe the black hole with more energy, it just expands its horizon to reveal no more, so what occurs below the Planck length is unknown.
There is the question of a Big Rip versus a Big Bang at the beginning, and at a fundamental level, the question of whether such a base reality had to emerge from a quantum reality.

Quantum Realism: Every virtual reality boots up with a first event that also begins its space and time. In this view, the Big Bang was when our physical universe booted up, including its space-time operating system. Quantum realism suggests that the Big Bang was really the Big Rip.
See number 10

It's not really a God of the gaps but a limitation on what we can know, so quantum realism steps in as to what the base reality arises from. Yet information is being released from the black hole.
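The information held at a black hole horizon can at least be counted. As a hedged numerical illustration (my own sketch, not from the post), the Bekenstein-Hawking entropy S/k_B = 4πGM²/(ħc) for a Schwarzschild black hole, divided by ln 2, gives the horizon's capacity in bits:

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s

def horizon_bits(mass_kg):
    """Bekenstein-Hawking entropy of a Schwarzschild black hole,
    expressed in bits: S/k_B = 4*pi*G*M^2/(hbar*c), divided by ln 2."""
    s_over_kb = 4 * math.pi * G * mass_kg**2 / (hbar * c)
    return s_over_kb / math.log(2)

solar_mass = 1.989e30  # kg
print(f"A solar-mass black hole holds ~{horizon_bits(solar_mass):.2e} bits")
```

For a solar-mass black hole this comes out around 10^77 bits, which is the usual back-of-envelope figure for how much "It from Bit" information a horizon encodes.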

Visualizing what happens between one quantum measurement and the next is visualizing a change of energy (pulling apart), since the idea of distance requires ever greater energies to probe. Reductionism has run up against its limits at the black hole, so what happens inside it must be looked at from the boundary.

Theoretical excursions from standard views do represent paradigmatic changes in cognitive function, like the perceptual slip between two readings of an image (the vase), and Kuhn is worth repeating here.

But more than that: regardless of these new views, things were already in place for me, governing sources attained in my own explorations of consciousness. In the light of this paradigmatic change, new words now cover what existed as experience before the simulation hypothesis gave access to what is believed to be quantum realism as an informational source. You see two mediums using some of these changes in views; this is a method I am using as well.

So that "one thing" is quite wide and pervasive, in my opinion, as to all information existing... much like Jung's collective unconscious... which settles through the tip of the pyramid as an inverse action of emergence, and becomes the base reality. A schematic transference of the idea settling into nature as the base of the pyramid. It would seem correlative with Plato pointing up to that one thing. Quantum realism? How ideas filter down into the base reality.

So it really is an interactive exchange, an inductive/deductive approach toward reality. This is part of my discovery regarding the geometrical underpinnings I mentioned. This development of thought becomes visible as I learn what is going on in science today.

So it has become part of my work to understand how the mathematics being developed along with quantum theory is being geometrically realized, as an excursion deep into consciousness that I explored through mandala interpretations of experiences attained in lucid dreaming. Layers of interpretation.

Long before understanding and learning of Euclid, I was drawing images of point, line and plane, and images developed further from this.

It was interesting to see the idea of illusion set in, between what was once seen and what is true, in a picture presented as part of the video demonstration in a follow-up. This somehow fits with the idea of quantum cognition, as seen in these subtle shifts of paradigmatic change, in my view.

Such acceptance opens onto a new perspective: the world seen through the simulation hypothesis. This, to me, would explain part of my acceptance of the hypothesis since 2009.


See Also:

  • Base Reality
  • Wednesday, June 08, 2016

    Base Reality?

    Elon Musk: "There’s a one in billions chance we’re in base reality." Written by Jason Koebler, Motherboard.
    "The strongest argument for us probably being in a simulation is the following: 40 years ago, we had Pong, two rectangles and a dot," Musk said. "That is what games were. Now, 40 years later, we have photorealistic 3D simulations with millions of people playing simultaneously, and it's getting better every year. And soon we'll have virtual reality, augmented reality. If you assume any rate of improvement at all, the games will become indistinguishable from reality."

    Progression can stop in this base reality... and science goes no further, due to a calamity that wipes out the human race?

    A thought that stuck out in my mind.

    As I went through comparative labels, some things that came up were quantum gravity and the physics of organic chemistry. How would a simulation hypothesis explain these things?

    This symbol was used to demonstrate, in a global sense, that everything is derived from bits. Taken from a speech given by John Archibald Wheeler in 1999. Also from J. A. Wheeler: Journey into Gravity and Spacetime (Scientific American Library, Freeman, New York, 1990), pg. 220

     Abstraction lives in the land of simulations, as information for consciousness? It only becomes real, physically, as a matter-oriented state of expression?

     But in the same breath,

        To my mind there must be, at the bottom of it all,
        not an equation, but an utterly simple idea.
        And to me that idea, when we finally discover it,
        will be so compelling, so inevitable,
        that we will say to one another,
        “Oh, how beautiful !
        How could it have been otherwise?”
        From a personal notebook of Wheeler, circa 1991

    An idea then.

    The iCub robot was designed by the RobotCub Consortium of several European universities, and is now supported by other projects such as ITALK.[1] The robot is open source, with the hardware design, software and documentation all released under the GPL license. The name is a partial acronym, "cub" standing for Cognitive Universal Body.[2] Initial funding for the project was €8.5 million from Unit E5 (Cognitive Systems and Robotics) of the European Commission's Seventh Framework Programme, and this ran for six years from 1 September 2004 until 1 September 2010.[2]

    The motivation behind the strongly humanoid design is the embodied cognition hypothesis, that human-like manipulation plays a vital role in the development of human cognition. A baby learns many cognitive skills by interacting with its environment and other humans using its limbs and senses, and consequently its internal model of the world is largely determined by the form of the human body. The robot was designed to test this hypothesis by allowing cognitive learning scenarios to be acted out by an accurate reproduction of the perceptual system and articulation of a small child so that it could interact with the world in the same way that such a child does.[3]




    Tuesday, January 07, 2014

    Is Reality a Virtual Simulation?

    To my mind there must be, at the bottom of it all,
    not an equation, but an utterly simple idea.
    And to me that idea, when we finally discover it,
    will be so compelling, so inevitable,
    that we will say to one another,
    “Oh, how beautiful !
    How could it have been otherwise?”
    From a personal notebook of Wheeler circa 1991
    This symbol was used to demonstrate in a global sense that everything is derived from bits. Taken from a speech given by John Archibald Wheeler in 1999. Also from, J. A. Wheeler: Journey into Gravity and Spacetime (Scientific American Library, Freeman, New York, 1990), pg. 220
    The Last Question. Of course, in science fiction we like to popularize things. As if the model itself has yet to become the real thing? Can one say their model is better while saying all other models are insufficient? They have to be speaking from a framework, right? In that sense Asimov was a visionary who brought the dream of it becoming something real.

    As I was reading, I got this impression of the "they" (as some grand designer), as if the designer of the monolith, with the apes as those without free will. I know it's just a story, but it is the first story, to me, in which we as storytellers gave the impression that HAL was imbued with something more than we had come to know in the beginning days of computer intelligence. In another sense still, moon dwellers with no free will.

    So in that sense there was this drive to apply human capabilities to a machine, and thus to express all humanity in terms of this machinist attribute of being? So at some point the Frankenstein (a biologically designed robot) becomes alive through our efforts to construct this live, emotive thing we call a robot.

    So it is as if the simulation had taken on this elevation of sorts, as to say the machine had graduated once it was realized that such a robot could dream, and thus a culmination of all things possible for a human being had somehow now become that simulation of reality? A Second Life?

    So we have this godlike power now. And in place of sending machines to distant planets to do our bidding and gather information, some "they" beyond the parameters of our seemingly capable world of science found out that the biological robot had already been designed? Say what?


    Friday, May 03, 2013

    Generalizations on, It from Bit

    I know there is an essay contest going on, and I thought it might be quite the challenge indeed to wonder about, and construct from a layman's standpoint, some of the ideas that are emerging from it.

    The past century in fundamental physics has shown a steady progression away from thinking about physics, at its deepest level, as a description of material objects and their interactions, and towards physics as a description of the evolution of information about and in the physical world. Moreover, recent years have shown an explosion of interest at the nexus of physics and information, driven by the "information age" in which we live, and more importantly by developments in quantum information theory and computer science.

    We must ask the question, though, is information truly fundamental or not? Can we realize John Wheeler's dream, or is it unattainable? We ask: "It From Bit or Bit From It?"

    Possible topics or sub-questions include, but are not limited to:

    What IS information? What is its relation to "Reality"?

    How does nature (the universe and the things therein) "store" and "process" information?

    How does understanding information help us understand physics, and vice-versa?
     See: It From Bit or Bit From It?
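One concrete, if partial, handle on "what IS information?" is Shannon's measure, which counts the bits needed per symbol of a message. A minimal sketch of my own (not part of the essay contest text):

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Average information per symbol, in bits:
    H = -sum(p * log2(p)) over the symbol frequencies p."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A fair coin carries 1 bit per toss; a biased stream carries less.
print(shannon_entropy("HTHTHTHT"))  # 1.0
print(shannon_entropy("HHHHHHHT"))  # ~0.544
```

The second stream is more predictable, so each symbol "tells" us less: exactly the sense in which information is tied to surprise rather than to any material carrier.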

    So maybe as I go along it will come together into something tangible, worth considering. If showing its production here eliminates any possibility of an entry, then at least I will have learned something along the way.

    Part 1

    To my mind there must be, at the bottom of it all,
    not an equation, but an utterly simple idea.
    And to me that idea, when we finally discover it,
    will be so compelling, so inevitable,
    that we will say to one another,
    “Oh, how beautiful !
    How could it have been otherwise?”
    From a personal notebook of Wheeler circa 1991

    This symbol was used to demonstrate in a global sense that everything is derived from bits. Taken from a speech given by John Archibald Wheeler in 1999.  Also from, J. A. Wheeler: Journey into Gravity and Spacetime (Scientific American Library, Freeman, New York, 1990),  pg. 220

    So the idea here, while starting from a vague representation of something that could exist at a deeper region of reality, is ever the question of where our perspective can go. As if telling a story and reaching for some climax, I ponder an approach. Going through the story is meant to bring us carefully to a place in the future, so as to see where John Archibald Wheeler took us, in the form of his perspective on Information, Physics and Quantum.

    So indeed I have opened on a simplistic level, so as to move this story into the forum of how one might approach a description of the world whose foundation has been purposeful and leading. I will move forward, present a phenomenological approach, and then go backward in time.

    Quantum gravity theory is untested experimentally. Could it be tested with tabletop experiments? While the common feeling is pessimistic, a detailed inquiry shows it possible to sidestep the onerous requirement of localization of a probe on Planck length scale. I suggest a tabletop experiment which, given state of the art ultrahigh vacuum and cryogenic technology, could already be sensitive enough to detect Planck scale signals. The experiment combines a single photon's degree of freedom with one of a macroscopic probe to test Wheeler's conception of "quantum foam", the assertion that on length scales of the order Planck's, spacetime is no longer a smooth manifold. The scheme makes few assumptions beyond energy and momentum conservation, and is not based on a specific quantum gravity scheme. See: Is a tabletop search for Planck scale signals feasible? by Jacob D. Bekenstein (submitted 16 Nov 2012 (v1), last revised 13 Dec 2012 (v2))
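For scale, the Planck length the abstract refers to follows from the fundamental constants alone. A quick sketch of my own, assuming standard SI values for the constants:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s

# Planck units: the scales built purely from hbar, G and c.
planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time = planck_length / c              # ~5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)        # ~2.2e-8 kg

print(f"Planck length: {planck_length:.2e} m")
```

Twenty orders of magnitude below a proton's radius, which is why Bekenstein's proposal to sidestep direct localization at that scale is notable.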

    The question of any such emergence is then to consider such examples in the context of the digital world of physics. That is to say, they can be used to grossly examine levels that take us to the quest of examining what exists as a basis of reality, and so too, what can be described as a purposeful examination of the interior of the black hole.

    Jacob D. Bekenstein

    Such examinations then ask whether these approaches will divide perspectives in science between those that adopt a discrete and those that adopt a continuum view of the basis of reality. This forces a division into categories we devise with respect to the sciences and their theoretical approach. From this perspective we see where John Wheeler sought a demonstration that would raise the question of a basis of reality on such a discrete approach.

    Such foundational methods then take form from these developments in perspective. John Wheeler sought a demonstration of the idea of foundation as an end result by his students. To seek an explanation of the interior of the black hole, so as to see such a progeny in his students, is to force this subject further along in history. So, from this standpoint, we go back in time.


    Generalizations on: "It From Bit", Part 2

    John Archibald Wheeler (July 9, 1911 – April 13, 2008) was an eminent American theoretical physicist. One of the later collaborators of Albert Einstein, he tried to achieve Einstein's vision of a unified field theory. He is also known as the coiner of the popular name of the well-known space phenomenon, the black hole.

    There is always somebody who is the teacher, and from them there is a progeny. It would not be right not to mention John Archibald Wheeler, or some of his students.

    Notable students

    Demetrios Christodoulou
    Richard Feynman
    Jacob Bekenstein
    Robert Geroch
    Bei-Lok Hu
    John R. Klauder
    Charles Misner
    Milton Plesset
    Kip Thorne
    Arthur Wightman
    Hugh Everett
    Bill Unruh

    So it is with some respect that, as we move back in time, we see the names of those who have brought us ever closer to the understanding and ideal of some phenomenological approach; such a course of events has indeed been fruitful. The branches that extend from John Archibald Wheeler's work also remind us of the wide diversity of approaches to understanding and developing gravitation.

    COSMIC SEARCH: How did you come up with the name "black hole"?

    John Archibald Wheeler: It was an act of desperation, to force people to believe in it. It was in 1968, at the time of the discussion of whether pulsars were related to neutron stars or to these completely collapsed objects. I wanted a way of emphasizing that these objects were real. Thus, the name "black hole".

    The Russians used the term frozen star—their point of attention was how it looked from the outside, where the material moves much more slowly until it comes to a horizon.* (*Or critical distance. From inside this distance there is no escape.) But, from the point of view of someone who's on the material itself, falling in, there's nothing special about the horizon. He keeps on going in. There's nothing frozen about what happens to him. So, I felt that that aspect of it needed more emphasis.

    So as we go back in time, we see where certain functions, as a description and features of a reality, suggest there was some beginning. It is also the realization that such a beginning asks us to consider the function and reality of new concepts, so as to force us to deal with the fundamentals of that reality.

    Dr. Kip Thorne, Caltech

    So again, as we go back in time, we see where such beginnings in science's approach had their start, not only in the recognition of the black hole, but in what has led toward today's approach to gravity in terms of what is discrete and what is considered a continuum. These functions, as gravity, show a certain distinction in today's science, tracing back to where John Archibald Wheeler's search for links to Information, Physics and the Quantum began.

    Friday, July 06, 2012

    The Bolshoi simulation

    A virtual world?

     The more complex the database, the more accurate one's simulation. The point, though, is that you have to capture scientific processes through calorimeter examinations, just as you do in the LHC.

    So these backdrops are processes for identifying particles as they approach the earth or are produced on earth. See Fermi's capture of thunderstorms; one might have asked how Fermi's picture-taking would have looked had it been pointed toward the Fukushima Daiichi nuclear disaster.

    So the idea here is: how do you map particulates as a measure of natural processes? The virtual world lacks the depth of measure with which correlation can exist in the natural world. Why? Because it asks the designers of computation and memory to directly map the results of the experiments. So who designs the experiments to meet the data?

     How did they know the energy range in which the Higgs boson would be detected?

    The Bolshoi simulation is the most accurate cosmological simulation of the evolution of the large-scale structure of the universe yet made ("bolshoi" is the Russian word for "great" or "grand"). The first two of a series of research papers describing Bolshoi and its implications have been accepted for publication in the Astrophysical Journal. The first data release of Bolshoi outputs, including output from Bolshoi and also the BigBolshoi or MultiDark simulation of a volume 64 times bigger than Bolshoi, has just been made publicly available to the world's astronomers and astrophysicists. The starting point for Bolshoi was the best ground- and space-based observations, including NASA's long-running and highly successful WMAP Explorer mission that has been mapping the light of the Big Bang in the entire sky. One of the world's fastest supercomputers then calculated the evolution of a typical region of the universe a billion light years across.

    The Bolshoi simulation took 6 million cpu hours to run on the Pleiades supercomputer (recently ranked as seventh fastest of the world's top 500 supercomputers) at NASA Ames Research Center. This visualization of dark matter is 1/1000 of the gigantic Bolshoi cosmological simulation, zooming in on a region centered on the dark matter halo of a very large cluster of galaxies. Chris Henze, NASA Ames Research Center. Introduction: The Bolshoi Simulation

    Snapshot from the Bolshoi simulation at a red shift z=0 (meaning at the present time), showing filaments of dark matter along which galaxies are predicted to form.
    CREDIT: Anatoly Klypin (New Mexico State University), Joel R. Primack (University of California, Santa Cruz), and Stefan Gottloeber (AIP, Germany).
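Bolshoi ran an adaptive-mesh code over millions of CPU hours, but the underlying idea is gravitational N-body integration: evolve many mass points under their mutual attraction. A toy direct-summation sketch of my own (nothing like the production code, toy units throughout):

```python
import random

G = 1.0       # toy gravitational constant
DT = 0.001    # timestep
SOFT = 0.05   # softening length, avoids singular forces at close range

def step(pos, vel, mass):
    """One step of direct-summation gravity, O(N^2) in particle count."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(x * x for x in d) + SOFT**2
            f = G * mass[j] / r2**1.5
            for k in range(3):
                acc[i][k] += f * d[k]
    for i in range(n):
        for k in range(3):
            vel[i][k] += acc[i][k] * DT
            pos[i][k] += vel[i][k] * DT
    return pos, vel

random.seed(0)
pos = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(32)]
vel = [[0.0, 0.0, 0.0] for _ in range(32)]
mass = [1.0] * 32
for _ in range(100):
    pos, vel = step(pos, vel, mass)
```

Direct summation scales as O(N²), which is why codes at Bolshoi's scale (billions of particles) use tree or adaptive-mesh methods instead.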

    Pleiades Supercomputer

     MOFFETT FIELD, Calif. – Scientists have generated the largest and most realistic cosmological simulations of the evolving universe to-date, thanks to NASA’s powerful Pleiades supercomputer. Using the "Bolshoi" simulation code, researchers hope to explain how galaxies and other very large structures in the universe changed since the Big Bang.

    To complete the enormous Bolshoi simulation, which traces how the largest galaxies and galaxy structures in the universe were formed billions of years ago, astrophysicists at New Mexico State University, Las Cruces, New Mexico, and the University of California High-Performance Astrocomputing Center (UC-HIPACC), Santa Cruz, Calif., ran their code on Pleiades for 18 days, consuming millions of hours of computer time and generating enormous amounts of data. Pleiades is the seventh most powerful supercomputer in the world.

    “NASA installs systems like Pleiades, that are able to run single jobs that span tens of thousands of processors, to facilitate scientific discovery,” said William Thigpen, systems and engineering branch chief in the NASA Advanced Supercomputing (NAS) Division at NASA's Ames Research Center.
    See: NASA Supercomputer Enables Largest Cosmological Simulations

    See Also: Dark matter’s tendrils revealed

    Thursday, June 21, 2012

    Reality is Information?

    This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
     Nick Bostrom interviewed about the Simulation Argument.
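Bostrom's trilemma rests on a simple counting identity: if a fraction f_p of civilizations reach a posthuman, simulation-running stage and each runs on average N ancestor-simulations of its own history, the fraction of human-like observers who are simulated is f_sim = f_p·N / (f_p·N + 1). A hedged sketch with entirely made-up parameter values:

```python
def simulated_fraction(f_posthuman, avg_simulations):
    """Fraction of human-like observers living in simulations,
    following the counting identity behind Bostrom's argument:
    f_sim = f_p * N / (f_p * N + 1)."""
    x = f_posthuman * avg_simulations
    return x / (x + 1)

# Illustrative, made-up parameter choices:
print(simulated_fraction(0.01, 1_000_000))  # ~0.9999, almost all simulated
print(simulated_fraction(0.0, 1_000_000))   # 0.0, proposition (1) or (2) holds
```

The whole argument is visible in the formula: unless f_p·N is tiny (propositions 1 or 2), f_sim is driven toward 1 (proposition 3).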

     In this informal interview in Atlanta June 8, 2012, Tom Campbell, author of My Big TOE, expands on the significance of the scientific experiment called the Double Slit in terms everyone can understand.

    " If you understand the Double Slit experiment, you understand how our reality works".
    He continues " Everything we do is not different from the Double Slit experiment".

    This explanation is valuable to scientists as well as the general public. Tom takes a difficult subject and applies helpful analogies to clarify the implications of this scientific experiment.
    See: Tom Campbell: Our Reality Is Information
    Update: Campbell recognizes that further information is needed to solidify his understanding of the double slit, and he is open to that, which is good.
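The double-slit pattern Campbell keeps returning to can be reproduced numerically by simply adding two complex amplitudes. A minimal far-field sketch of my own (idealized point slits, arbitrary wavelength and separation):

```python
import cmath
import math

WAVELENGTH = 500e-9   # 500 nm light
SLIT_SEP = 10e-6      # 10 micron slit separation

def intensity(theta):
    """Two-slit far-field intensity: superpose the unit amplitudes from
    each slit, with phase difference (2*pi/lambda) * d * sin(theta)."""
    phase = 2 * math.pi * SLIT_SEP * math.sin(theta) / WAVELENGTH
    amplitude = 1 + cmath.exp(1j * phase)   # slit A + slit B
    return abs(amplitude) ** 2

# Central maximum: both paths in phase, 4x a single slit's intensity.
print(intensity(0.0))  # 4.0
```

The interference lives entirely in the complex phase; squaring the summed amplitude, rather than summing the squared amplitudes, is the whole quantum content of the experiment.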


    Tuesday, June 12, 2012

    The Heart Connection

    If the heart was free from the impurities of sin, and therefore lighter than the feather, then the dead person could enter the eternal afterlife.

    Many times we talk about the heart as a physical thing that exists within our bodies, as if supplanting it with a mechanical one meant the person is somehow still whole. But it is more than that for me. In terms of the way we as human beings send messages throughout our bodies, as quick as thought in experience, the heart encapsulates our emotions and remembrance; "as a tool," it is a messenger?

    So it is more than the physical aspect that is of importance in my thinking: we use the heart to measure the emotive quality of our being, to transfer the baser question of our ancient self to a more transient capacity of the heart in mind?

    It is understood, then, that the heart is the connection between the lower, reactive part of our being and the mind with which, intellectually, we like to deal with the world. So you say you are quite smart, but really, without this emotive understanding of our being, you are no smarter, more a reactive, ancient product of evolution on an animal scale, absent that heart in mind?

    So I should clarify what I am saying about the idea. I throw such thoughts out into society, somehow hoping people will know what I am talking about. I listed a new label, Heart, at the bottom of this post, so maybe that will help if you are interested in what thoughts I may have about this heart connection.

    So I am going to add some words that may make no sense to you. How we change from day to day is how we learn more, which helps us to see reality in different ways. Not so localized in the event of your observation that you lose sight of the larger reality? Larger reality?

    Ah, so you have collapsed the wave function? So ya, it's all data... let's not be picky :) The realization is "the measure." That is why I say certain scientists practise "in measure" and VR, an inevitable consequence of applying science in the virtual way.

    If I ask who the objectifier is in the case of your acceptance of NPMR (Non-Physical-Matter Reality), then who has decided that this experience will have been the result of accepted data?

    As a receiver of the data, whether you are aware of it or not, you pick up this information and translate it for an "anticipated future?" How can you reason without using the creative imagination here, so as to discern that the path in life is a consequential and selective one, based on the criteria of the life you live now?
    How do you imbue memory without making an impression through the emotive element of your being in any moment?

    The creative imagination, then, is a predictive feature of how we select events within the anticipated future? For all concerned this is an objectified reality for you (who?), whether it may be deemed subjective or not, as an acceptance of the data we process at a much higher level of observation.

    You (who?) have already calculated this, and it already exists within the potential of our acceptance. How else could any course of action have been reasoned? This higher function within the context of the subconscious is much more aware of the conditions that exist in the objectified reality; you (who?) see the subtle much more clearly than you (in everyday life) would by dealing only with the objectified, condensed feature? The containment of experience relegates a "kind of gravity" toward the understanding that what is aggregated to comes with an emotive realization.

    So I would say the objectified understanding and usage of the creative is much more developed in each of us, enough to see, from this perspective, the undercurrents that exist in society. What is the current of the individual en masse, as the larger collective gathering of belief?

    If you do not practise understanding the emotive elements of experience, then you will not recognize how you send things to the lived past. I would say, then, that your EQ is much more important than your IQ, which is why I discount "only intellectual reasoning" as prevalent.

    Mr. JOHN A. R. NEWLANDS read a paper entitled "The Law of Octaves, and the Causes of Numerical Relations among the Atomic Weights."[41] The author claims the discovery of a law according to which the elements analogous in their properties exhibit peculiar relationships, similar to those subsisting in music between a note and its octave. Starting from the atomic weights on Cannizzarro's [sic] system, the author arranges the known elements in order of succession, beginning with the lowest atomic weight (hydrogen) and ending with thorium (=231.5); placing, however, nickel and cobalt, platinum and iridium, cerium and lanthanum, &c., in positions of absolute equality or in the same line. The fifty-six elements[42] so arranged are said to form the compass of eight octaves, and the author finds that chlorine, bromine, iodine, and fluorine are thus brought into the same line, or occupy corresponding places in his scale. Nitrogen and phosphorus, oxygen and sulphur, &c., are also considered as forming true octaves. The author's supposition will be exemplified in Table II., shown to the meeting, and here subjoined. See Also: Dmitri Ivanovich Mendeleev: The Law of Octaves
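Newlands' scheme is easy to replay: order the elements known to him by atomic weight, and every eighth element falls in the same family, like a musical octave. A rough sketch of my own (modern symbols; since the noble gases were unknown in 1865, each "octave" spans seven places, and Newlands' actual table had several forced placements this sketch ignores):

```python
# First seventeen elements in Newlands' weight order.
elements = ["H", "Li", "Be", "B", "C", "N", "O",
            "F", "Na", "Mg", "Al", "Si", "P", "S",
            "Cl", "K", "Ca"]

# Group by position modulo 7: the "same note, next octave" rule.
octave = {}
for i, symbol in enumerate(elements):
    octave.setdefault(i % 7, []).append(symbol)

for note, family in sorted(octave.items()):
    print(note, family)
```

Note 0 collects H, F, Cl and note 1 collects Li, Na, K: the halogen-like and alkali-metal families Newlands pointed to.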

    For you, since you mentioned Gurdjieff and Ouspensky: does translating the octaves (look at the table of elements, and Seaborg) amount to the realization of harmonic, empathetic resonance in other things? You have to be able to translate experience? When are you locked in your own frame of reference, and how would the objectified selection be to see this experience from another, higher perspective? Is self-remembering "a place of equilibrium?" If the flow of information is unimpeded, moving from "the birth of a new universe" out of the one previous, how is that done?

    You come back to this life "unaware," but you had already set the course of action in accepting to relive life all over again? Why unaware? Imagine, then, that you set your body of the NPMR within the objectified, concrete reality of where you are now. It is of consequence, then, that you lose sight of the larger perspective. Emotively we live this way all the time, unaware of the larger perspective. Emotional experience (the gravity of the situation) limits the understanding of the larger view because it becomes the lived past?

    You have to transform, inside, the baser emotion from "the four square" to the triangle? In ancient times, the four square was an arrangement of the astrological process of Plato's elements; quintessence comes later? Since I am not an alchemist, I would say the understanding is that any baser emotion has to go through this transformation in order to reach quintessence?

    Hence once this change takes place (our acceptance and our responsibility of being, and it is never emotively over), our understanding is that experience in reality "is a frame of reference?"

    I had placed this in terms of the Socratic development later, but you might also see some correspondence with how experience is based on the emotive, hierarchical understanding of placement on that triangle. What does the "tip" represent?

    This, then, is the transition to "the heart" in mind. :) How does one then "raise the octave?" The heart becomes a very important location between the lower and higher centres? This does not say we overcome "the emotive struggle," because it is forever the emotive struggle toward perfecting, toward reasoning in a "higher emotive mind."

    I will say I was able to identify some correlation to "predictive outcomes by chance" by looking at the I Ching's constructive formation: trigrams and hexagrams as constituents of the formative experience of an ancient way of thinking. You always had to have "the question in mind" for an outcome to materialize within the chance parameters of how the outcome was constructed, as a choice of direction. Yarrow sticks, or roll the dice.
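The chance construction mentioned here can be made concrete with the traditional three-coin method: each toss of three coins yields one of four line values, and six tosses build a hexagram from the bottom up. A small sketch of my own (standard coin convention: heads = 3, tails = 2):

```python
import random

def cast_line(rng):
    """Toss three coins, heads=3, tails=2. Totals 6 through 9 map to the
    four traditional line types: 6 old yin, 7 young yang,
    8 young yin, 9 old yang."""
    return sum(rng.choice((2, 3)) for _ in range(3))

def cast_hexagram(rng):
    """Six lines, cast bottom line first."""
    return [cast_line(rng) for _ in range(6)]

rng = random.Random()
print(cast_hexagram(rng))  # six values, each between 6 and 9
```

The "question in mind" supplies the interpretation; the mechanism itself is just a biased four-outcome draw repeated six times.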

     Explain logic in the simplest terms possible, as the "aha moment."
     It is placed as a philosophical question of how we can simplify that transference... but in the context of VR, and in measure, how can you measure your intuition and recognize the truth in the simplest of statements?

    Your aha is an immediate recognition of the greater potential as it all comes together in your mind in a split second? There is no possible way to measure that in any VR way?

    Unless you believe it is the synapse through which such potentials allow information into your brain. You know I mean more than just the matter aspect of the thinking potential: the complete grokking of the subject at hand.

    So the synapse is a portal of a kind in terms of your connection to a vast reservoir of information? If you stand at the portal and have access to all information, how big is the memory that you can draw from experience? If you are standing "at the portal," then it is about your last question asked. Who is the questioner/observer but one who has realized you are an accumulation of the information you have assimilated in life? It is more than the question itself, if you understand what I am saying. It is more than a question mark appearing at the end of the sentence.

    Thursday, May 31, 2012

    Mirror Neurons

    Neuroscientific evidence suggests that one basic entry point into understanding others' goals and feelings is the process of actively simulating in our own brain the actions we observe in others. This involves the firing of neurons that would be activated were we actually performing an action, although we are only observing it in someone else. Neurons performing mirroring functions have been directly observed in primates and other species, including birds. In humans, brain activity consistent with "mirroring" has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex.
    The data revealed that even the most complex, abstract emotions—those that require maturity, reflection, and world knowledge to appreciate—do involve our most advanced brain networks. However, they seem to get their punch—their motivational push—from activating basic biological regulatory structures in the most primitive parts of the brain, those responsible for monitoring functions like heart rate and breathing. In turn, the basic bodily changes induced during even the most complex emotions—e.g., our racing heart or clenched gut—are "felt" by sensory brain networks. When we talk of having a gut feeling that some action is right or wrong, we are not just speaking metaphorically.

    So, I'm saying the mirror neuron system underlies the interface allowing you to rethink issues like consciousness, representation of self, what separates you from other human beings, what allows you to empathize with other human beings, and also even things like the emergence of culture and civilization, which is unique to human beings. See: VS Ramachandran: The neurons that shaped civilization

      How important is the environment, in that we might see the development of the conditions of "specific types of neurons" when the color can dictate the type of neuron developed? Can we say that the color (emotion) is an emotive state that we might indeed create in the type of consciousness with which we meet the world? A consciousness that sets the trains of thought given the reality of our own perceptions. Or, perpetuated thought processes unravelled in a world of our own illusions?

    In a nutshell, what Karim showed was that each time a memory is used, it has to be restored as a new memory in order to be accessible later. The old memory is either not there or is inaccessible. In short, your memory about something is only as good as your last memory about it. Joseph LeDoux

    Psychology professor Karim Nader is helping sufferers of post-traumatic stress disorder lessen debilitating symptoms and, in some cases, regain a normal life. See also: The Trauma Tamer. See also: Brain Storming

    IC: Why is this research so important?

    Karim Nader: There are a lot of implications. All psychopathological disorders, such as PTSD, epilepsy, obsessive compulsive disorders, or addiction—all these things have to do with your brain getting rewired in a way that is malfunctioning. Theoretically, we may be able to treat a lot of these psychopathologies. If you could block the re-storage of the circuit that causes the obsessive compulsion, then you might be able to reset a person to a level where they aren’t so obsessive. Or perhaps you can reset the circuit that has undergone epilepsy repeatedly so that you can increase the threshold for seizures. And there is some killer data showing that it’s possible to block the reconsolidation of drug cravings.

    The other reason why I think it is so striking is that it is so contrary to what has been the accepted view of memory for so long in the mainstream. My research caused everybody in the field to stop, turn around and go, “Whoa, where’d that come from?” Nobody’s really working on this issue, and the only reason I came up with this is because I wasn’t trained in memory. [Nader was originally researching fear.] It really caused a fundamental reconceptualization of a very basic and dogmatic field in neuroscience, which is very exciting. It is the first time in 100 years that people are starting to come up with new models of memory at the physiological level.

    Part of the understanding for me is that, in creating this environment for neural development, the retention of memory has to have some emotive basis, one which arises from the ancient part of our brain and is associated with the heart response.

     Savas Dimopoulos

    Here’s an analogy to understand this: imagine that our universe is a two-dimensional pool table, which you look down on from the third spatial dimension. When the billiard balls collide on the table, they scatter into new trajectories across the surface. But we also hear the click of sound as they impact: that’s collision energy being radiated into a third dimension above and beyond the surface. In this picture, the billiard balls are like protons and neutrons, and the sound wave behaves like the graviton. See: The Sound Of Billiard Balls
    While these physiological processes are going on in our bodies, the chemical responses of emotion trigger manifestations in the world outside of our bodies. Let us say consciousness exists "at the periphery of our bodies." What measure, then, to assess the realization that such internal manifestations are in the control of our manipulations of living experience? Are we then not caught in their throes, and are we not machine-like to think such associations could ever be produced in a manufactured, robot-like being?

    Of course, this is a fictional representation above of what may resound within, according to the experiences we may have? The question then is how memories are retained. How do memories transmit, throughout our endocrine system, the nature of our experiences, so that we see consciousness as a form of the expression through which we color our world?

    Monday, May 28, 2012

    Embodied Cognition and iCub

    An iCub robot mounted on a supporting frame. The robot is 104 cm high and weighs around 22 kg
    An iCub is a 1 metre high humanoid robot testbed for research into human cognition and artificial intelligence.

    Systems that perceive, understand and act

    It was designed by the RobotCub Consortium, made up of several European universities, and is now supported by other projects such as ITALK.[1] The robot is open-source, with the hardware design, software and documentation all released under the GPL license. The name is a partial acronym, cub standing for Cognitive Universal Body.[2] Initial funding for the project was €8.5 million from Unit E5 – Cognitive Systems and Robotics – of the European Commission's Seventh Framework Programme, and this ran for six years from 1 September 2004 until 1 September 2010.[2]

    The motivation behind the strongly humanoid design is the embodied cognition hypothesis, that human-like manipulation plays a vital role in the development of human cognition. A baby learns many cognitive skills by interacting with its environment and other humans using its limbs and senses, and consequently its internal model of the world is largely determined by the form of the human body. The robot was designed to test this hypothesis by allowing cognitive learning scenarios to be acted out by an accurate reproduction of the perceptual system and articulation of a small child so that it could interact with the world in the same way that such a child does.[3]

     See Also: RobotCub

    In philosophy, the embodied mind thesis holds that the nature of the human mind is largely determined by the form of the human body. Philosophers, psychologists, cognitive scientists and artificial intelligence researchers who study embodied cognition and the embodied mind argue that all aspects of cognition are shaped by aspects of the body. The aspects of cognition include high level mental constructs (such as concepts and categories) and human performance on various cognitive tasks (such as reasoning or judgement). The aspects of the body include the motor system, the perceptual system, the body's interactions with the environment (situatedness) and the ontological assumptions about the world that are built into the body and the brain.

    The embodied mind thesis is opposed to other theories of cognition such as cognitivism, computationalism and Cartesian dualism.[1] The idea has roots in Kant and 20th century continental philosophy (such as Merleau-Ponty). The modern version depends on insights drawn from recent research in psychology, linguistics, cognitive science, artificial intelligence, robotics and neurobiology.

    Embodied cognition is a topic of research in social and cognitive psychology, covering issues such as social interaction and decision-making.[2] Embodied cognition reflects the argument that the motor system influences our cognition, just as the mind influences bodily actions. For example, when participants hold a pencil in their teeth, engaging the muscles of a smile, they comprehend pleasant sentences faster than unpleasant ones.[3] And it works in reverse: holding a pencil between their lips, engaging the muscles of a frown, increases the time it takes to comprehend pleasant sentences.[3]

    George Lakoff (a cognitive scientist and linguist) and his collaborators (including Mark Johnson, Mark Turner, and Rafael E. Núñez) have written a series of books promoting and expanding the thesis based on discoveries in cognitive science, such as conceptual metaphor and image schema.[4]
    Robotics researchers such as Rodney Brooks, Hans Moravec and Rolf Pfeifer have argued that true artificial intelligence can only be achieved by machines that have sensory and motor skills and are connected to the world through a body.[5] The insights of these robotics researchers have in turn inspired philosophers like Andy Clark and Horst Hendriks-Jansen.[6]

    Neuroscientists Gerald Edelman, António Damásio and others have outlined the connection between the body, individual structures in the brain and aspects of the mind such as consciousness, emotion, self-awareness and will.[7] Biology has also inspired Gregory Bateson, Humberto Maturana, Francisco Varela, Eleanor Rosch and Evan Thompson to develop a closely related version of the idea, which they call enactivism.[8] The motor theory of speech perception proposed by Alvin Liberman and colleagues at the Haskins Laboratories argues that the identification of words is embodied in perception of the bodily movements by which spoken words are made.[9][10][11][12][13]

    The mind-body problem is a philosophical problem arising in the fields of metaphysics and philosophy of mind.[2] The problem arises because mental phenomena arguably differ, qualitatively or substantially, from the physical body on which they apparently depend. There are a few major theories on the resolution of the problem. Dualism is the theory that the mind and body are two distinct substances,[2] and monism is the theory that they are, in reality, just one substance. Monist materialists (also called physicalists) take the view that they are both matter, and monist idealists take the view that they are both in the mind. Neutral monists take the view that both are reducible to a third, neutral substance.

    The problem was identified by René Descartes in the sense known by the modern Western world, although the issue was also addressed by pre-Aristotelian philosophers,[3] in Avicennian philosophy,[4] and in earlier Asian traditions.

    A dualist view of reality may lead one to consider the corporeal as little valued[3] and trivial. The rejection of the mind–body dichotomy is found in French Structuralism, and is a position that generally characterized post-war French philosophy.[5] The absence of an empirically identifiable meeting point between the non-physical mind and its physical extension has proven problematic to dualism and many modern philosophers of mind maintain that the mind is not something separate from the body.[6] These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences.[7][8][9][10]

    Wednesday, May 23, 2012

    Hypercomputation
    Hypercomputation or super-Turing computation refers to models of computation that go beyond, or are incomparable to, Turing computability. This includes various hypothetical methods for the computation of non-Turing-computable functions, following super-recursive algorithms (see also supertask). The term "super-Turing computation" appeared in a 1995 Science paper by Hava Siegelmann. The term "hypercomputation" was introduced in 1999 by Jack Copeland and Diane Proudfoot.[1]

    The terms are not quite synonymous: "super-Turing computation" usually implies that the proposed model is supposed to be physically realizable, while "hypercomputation" does not.

    Technical arguments against the physical realizability of hypercomputation have been presented.





    A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals.[2] This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are strictly mathematical abstractions, and are not physically realizable.[3]

    Hypercomputation and the Church–Turing thesis


    The Church–Turing thesis states that any function that is algorithmically computable can be computed by a Turing machine. Hypercomputers would compute functions that a Turing machine cannot, and which are hence not computable in the Church–Turing sense.
    An example of a problem a Turing machine cannot solve is the halting problem. A Turing machine cannot decide if an arbitrary program halts or runs forever. Some proposed hypercomputers can simulate the program for an infinite number of steps and tell the user whether or not the program halted.
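The asymmetry can be illustrated with a toy, step-bounded simulator (a sketch in Python; the two-instruction mini-language and all names are my own illustration, not from any source): if a program halts, a long enough simulation eventually witnesses it, but no finite number of steps can ever confirm "runs forever."

```python
# Toy program = list of (op, arg) acting on one counter.
# "add" adds arg to the counter; "jnz" jumps to arg while counter != 0.

def run_bounded(prog, max_steps):
    """Simulate; return ("halted", steps) or ("unknown", max_steps)."""
    pc, counter = 0, 0
    for step in range(max_steps):
        if pc >= len(prog):              # fell off the end: halted
            return ("halted", step)
        op, arg = prog[pc]
        if op == "add":
            counter += arg
            pc += 1
        elif op == "jnz":
            pc = arg if counter != 0 else pc + 1
    return ("unknown", max_steps)        # forever? we can never be sure

halting = [("add", 3), ("add", -3)]      # runs off the end after 2 steps
looping = [("add", 1), ("jnz", 0)]       # counter grows, loops forever

print(run_bounded(halting, 100))         # -> ('halted', 2)
print(run_bounded(looping, 100))         # -> ('unknown', 100)
```

No matter how large `max_steps` is made, the second answer stays "unknown"; a hypercomputer that could genuinely complete infinitely many steps would be able to replace it with a definite "never halts."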

    Hypercomputer proposals


    • A Turing machine that can complete infinitely many steps. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1 + ½ + ¼ + ... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes. However, some authors claim that, following the reasoning from Zeno's paradox, Zeno machines are not just physically impossible but logically impossible.[4]
    • Turing's original oracle machines, defined by Turing in 1939.
    • In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals"[5] and "trial-and-error predicates",[6] respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify it as correct if it is recursive; otherwise, the correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)"[6] L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem"[7] studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
    • A real computer (a sort of idealized analog computer) can perform hypercomputation[8] if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would at minimum require the ability to measure a real-valued physical value to arbitrary precision despite thermal noise and quantum effects.
    • A proposed technique known as fair nondeterminism or unbounded nondeterminism may allow the computation of noncomputable functions.[9] There is dispute in the literature over whether this technique is coherent, and whether it actually allows noncomputable functions to be "computed".
    • It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) makes hypercomputation possible by itself. However, this is not so since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation.[10] Access to a CTC may allow the rapid solution to PSPACE-complete problems, a complexity class which while Turing-decidable is generally considered computationally intractable.[11][12]
    • According to a 1992 paper,[13] a computer operating in a Malament-Hogarth spacetime or in orbit around a rotating black hole[14] could theoretically perform non-Turing computations.[15][16]
    • In 1994, Hava Siegelmann proved that her new (1991) computational model, the Artificial Recurrent Neural Network (ARNN), could perform hypercomputation (using infinite precision real weights for the synapses). It is based on evolving an artificial neural network through a discrete, infinite succession of states.[17]
    • The infinite time Turing machine is a generalization of the Zeno machine, that can perform infinitely long computations whose steps are enumerated by potentially transfinite ordinal numbers. It models an otherwise-ordinary Turing machine for which non-halting computations are completed by entering a special state reserved for reaching a limit ordinal and to which the results of the preceding infinite computation are available.[18]
    • Jan van Leeuwen and Jiří Wiedermann wrote a 2000 paper[19] suggesting that the Internet should be modeled as a nonuniform computing system equipped with an advice function representing the ability of computers to be upgraded.
    • A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges, that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber ([20][21]) uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can solve the halting problem by evaluating a Specker sequence.
    • A quantum mechanical system which somehow uses an infinite superposition of states to compute a non-computable function.[22] This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).[23]
    • In 1970, E.S. Santos defined a class of fuzzy logic-based "fuzzy algorithms" and "fuzzy Turing machines".[24] Subsequently, L. Biacino and G. Gerla showed that such a definition would allow the computation of nonrecursive languages; they suggested an alternative set of definitions without this difficulty.[25] Jiří Wiedermann analyzed the capabilities of Santos' original proposal in 2004.[26]
    • Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic.[27]
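The Zeno machine's accelerating schedule in the first item above can be checked directly: the step durations form a geometric series whose partial sums converge to 2 minutes (a Python sketch; the function name is my own).

```python
# Step k of a Zeno machine takes 2**-k minutes: 1, 1/2, 1/4, ...
# Infinitely many steps fit inside 2 minutes, since the partial
# sums 1, 1.5, 1.75, ... approach (but never reach) the limit 2.

def zeno_time(steps):
    """Total minutes consumed by the first `steps` steps."""
    return sum(2.0 ** -k for k in range(steps))

for n in (1, 2, 10, 50):
    print(n, zeno_time(n))
# partial sums: 1.0, 1.5, 1.998046875, ... converging toward 2.0
```

This is also where the logical objection bites: the series tells us when all steps are done, but says nothing about what state the machine is in "at" the 2-minute mark.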

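The Gold/Putnam idea of answers that are only "correct in the limit" can also be sketched concretely (Python; the example set and all names are my own illustration): the machine posits an answer, may revise it finitely many times, and is judged by its last answer, which it can never certify as final.

```python
# Trial-and-error "decision" of membership in a computably
# enumerable set, given only an enumeration of its elements.
# The current guess is correct from some stage onward, but the
# machine never knows when that stage has been reached.

def trial_and_error_member(n, enumeration, stages):
    """Yield the current guess after each stage; correct in the limit."""
    guess = False                      # initial posit: n is not in the set
    seen = set()
    for _ in range(stages):
        seen.add(next(enumeration))    # enumerate one more element
        if n in seen:
            guess = True               # one revision, never revised back
        yield guess

def evens():                           # an enumeration of the even numbers
    k = 0
    while True:
        yield k
        k += 2

print(list(trial_and_error_member(8, evens(), 10)))
# -> [False, False, False, False, True, True, True, True, True, True]
```

For n = 7 the guess would stay False forever, and that answer is correct, yet no finite prefix of the run distinguishes "correctly False" from "not enumerated yet": exactly Putnam's point about empirical predicates.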

    Analysis of capabilities

    Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing Σ^0_1 or Π^0_1. Limiting-recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be Δ^0_2. Gold further showed that limiting partial recursion would allow the computation of precisely the Σ^0_2 predicates.
    Each model is listed with the predicates it can compute, followed by notes and references:
    • supertasking: tt(Σ^0_1, Π^0_1); dependent on outside observer [28]
    • limiting/trial-and-error: Δ^0_2
    • iterated limiting (k times): Δ^0_{k+1}
    • Blum–Shub–Smale machine: incomparable with traditional computable real functions [29]
    • Malament–Hogarth spacetime: HYP; dependent on spacetime structure [30]
    • analog recurrent neural network: Δ^0_1[f], where f is an advice function giving connection weights; size is bounded by runtime [31][32]
    • infinite time Turing machine: ≥ T(Σ^1_1) [33]
    • classical fuzzy Turing machine: Σ^0_1 ∪ Π^0_1, for any computable t-norm [34]
    • increasing function oracle: Δ^1_1 for the one-sequence model; Π^1_1 are r.e. [27]


    Taxonomy of "super-recursive" computation methodologies

    Burgin has collected a list of what he calls "super-recursive algorithms" (from Burgin 2005: 132):
    • limiting recursive functions and limiting partial recursive functions (E. M. Gold[5])
    • trial and error predicates (Hilary Putnam[6])
    • inductive inference machines (Carl Herbert Smith)
    • inductive Turing machines (one of Burgin's own models)
    • limit Turing machines (another of Burgin's models)
    • trial-and-error machines (Ja. Hintikka and A. Mutanen [35])
    • general Turing machines (J. Schmidhuber[21])
    • Internet machines (van Leeuwen, J. and Wiedermann, J.[19])
    • evolutionary computers, which use DNA to produce the value of a function (Darko Roglic[36])
    • fuzzy computation (Jiří Wiedermann[26])
    • evolutionary Turing machines (Eugene Eberbach[37])
    In the same book, he also presents a list of "algorithmic schemes".



    Martin Davis, in his writings on hypercomputation,[39][40] refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claims that this is a new field founded in the 1990s. This point of view relies on the history of computability theory (degrees of unsolvability, computability over functions, real numbers and ordinals), as also mentioned above.
    Andrew Hodges wrote a critical commentary[41] on Copeland and Proudfoot's article.[1]


    References



    1. ^ a b Copeland and Proudfoot, Alan Turing's forgotten ideas in computer science. Scientific American, April 1999
    2. ^ Alan Turing (1939). "Systems of Logic Based on Ordinals". Proceedings of the London Mathematical Society, Series 2, Volume 45, Issue 1, pp. 161–228.[1]
    3. ^ "Let us suppose that we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine" (Undecidable p. 167, a reprint of Turing's paper Systems of Logic Based On Ordinals)
    4. ^ These models have been independently developed by many different authors, including Hermann Weyl (1927). Philosophie der Mathematik und Naturwissenschaft.; the model is discussed in Shagrir, O. (June 2004). "Super-tasks, accelerating Turing machines and uncomputability". Theor. Comput. Sci. 317, 1-3 317: 105–114. doi:10.1016/j.tcs.2003.12.007. and in Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
    5. ^ a b c E. M. Gold (1965). "Limiting Recursion". Journal of Symbolic Logic 30 (1): 28–48. doi:10.2307/2270580. JSTOR 2270580., E. Mark Gold (1967). "Language identification in the limit". Information and Control 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
    6. ^ a b c Hilary Putnam (1965). "Trial and Error Predicates and the Solution to a Problem of Mostowksi". Journal of Symbolic Logic 30 (1): 49–57. doi:10.2307/2270581. JSTOR 2270581.
    7. ^ a b L. K. Schubert (July 1974). "Iterated Limiting Recursion and the Program Minimization Problem". Journal of the ACM 21 (3): 436–445. doi:10.1145/321832.321841.
    8. ^ Arnold Schönhage, "On the power of random access machines", in Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 520-529, 1979. Source of citation: Scott Aaronson, "NP-complete Problems and Physical Reality"[2] p. 12
    9. ^ Edith Spaan, Leen Torenvliet and Peter van Emde Boas (1989). "Nondeterminism, Fairness and a Fundamental Analogy". EATCS bulletin 37: 186–193.
    10. ^ Hajnal Andréka, István Németi and Gergely Székely, Closed Timelike Curves in Relativistic Computation, 2011.[3]
    11. ^ Todd A. Brun, Computers with closed timelike curves can solve hard problems, Found.Phys.Lett. 16 (2003) 245-253.[4]
    12. ^ S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent [5]
    13. ^ Hogarth, M., 1992, ‘Does General Relativity Allow an Observer to View an Eternity in a Finite Time?’, Foundations of Physics Letters, 5, 173–181.
    14. ^ István Neméti; Hajnal Andréka (2006). "Can General Relativistic Computers Break the Turing Barrier?". Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30-July 5, 2006. Proceedings. Lecture Notes in Computer Science. 3988. Springer. doi:10.1007/11780342.
    15. ^ Etesi, G., and Nemeti, I., 2002 'Non-Turing computations via Malament-Hogarth space-times', Int.J.Theor.Phys. 41 (2002) 341–370, Non-Turing Computations via Malament-Hogarth Space-Times:.
    16. ^ Earman, J. and Norton, J., 1993, ‘Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes’, Philosophy of Science, 5, 22–42.
    17. ^ Verifying Properties of Neural Networks p.6
    18. ^ Joel David Hamkins and Andy Lewis, Infinite time Turing machines, Journal of Symbolic Logic, 65(2):567-604, 2000.[6]
    19. ^ a b Jan van Leeuwen; Jiří Wiedermann (September 2000). "On Algorithms and Interaction". MFCS '00: Proceedings of the 25th International Symposium on Mathematical Foundations of Computer Science. Springer-Verlag.
    20. ^ Jürgen Schmidhuber (2000). "Algorithmic Theories of Everything". arXiv:quant-ph/0011122. Sections appeared as "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit", International Journal of Foundations of Computer Science 13 (4): 587–612, and as Section 6 of "The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions", in J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT), Sydney, Australia, Lecture Notes in Artificial Intelligence, pages 216–228, Springer.
    21. ^ a b J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit". International Journal of Foundations of Computer Science 13 (4): 587–612. doi:10.1142/S0129054102001291.
    22. ^ There have been some claims to this effect; see Tien Kieu (2003). "Quantum Algorithm for the Hilbert's Tenth Problem". Int. J. Theor. Phys. 42 (7): 1461–1478. arXiv:quant-ph/0110136. doi:10.1023/A:1025780028846.. & the ensuing literature. Errors have been pointed out in Kieu's approach by Warren D. Smith in Three counterexamples refuting Kieu’s plan for “quantum adiabatic hypercomputation”; and some uncomputable quantum mechanical tasks
    23. ^ Bernstein and Vazirani, Quantum complexity theory, SIAM Journal on Computing, 26(5):1411-1473, 1997. [7]
    24. ^ Santos, Eugene S. (1970). "Fuzzy Algorithms". Information and Control 17 (4): 326–339. doi:10.1016/S0019-9958(70)80032-8.
    25. ^ Biacino, L.; Gerla, G. (2002). "Fuzzy logic, continuity and effectiveness". Archive for Mathematical Logic 41 (7): 643–667. doi:10.1007/s001530100128. ISSN 0933-5846.
    26. ^ a b Wiedermann, Jiří (2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theor. Comput. Sci. 317 (1–3): 61–69. doi:10.1016/j.tcs.2003.12.004.
    27. ^ a b Dmytro Taranovsky (July 17, 2005). "Finitism and Hypercomputation". Retrieved Apr 26, 2011.
    28. ^ Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
    29. ^ Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale. Complexity and Real Computation. ISBN 0-387-98281-7.
    30. ^ P. D. Welch (10-Sept-2006). The extent of computation in Malament-Hogarth spacetimes. arXiv:gr-qc/0609035.
    31. ^ Hava Siegelmann (April 1995). "Computation Beyond the Turing Limit". Science 268 (5210): 545–548. doi:10.1126/science.268.5210.545. PMID 17756722.
    32. ^ Hava Siegelmann; Eduardo Sontag (1994). "Analog Computation via Neural Networks". Theoretical Computer Science 131 (2): 331–360. doi:10.1016/0304-3975(94)90178-3.
    33. ^ Joel David Hamkins; Andy Lewis (2000). "Infinite Time Turing machines". Journal of Symbolic Logic 65 (2): 567–604.
    34. ^ Jiří Wiedermann (June 4, 2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theoretical Computer Science (Elsevier Science Publishers Ltd. Essex, UK) 317 (1–3).
    35. ^ Hintikka, Ja; Mutanen, A. (1998). "An Alternative Concept of Computability". Language, Truth, and Logic in Mathematics. Dordrecht. pp. 174–188.
    36. ^ Darko Roglic (24–Jul–2007). "The universal evolutionary computer based on super-recursive algorithms of evolvability". arXiv:0708.2686 [cs.NE].
    37. ^ Eugene Eberbach (2002). "On expressiveness of evolutionary computation: is EC algorithmic?". Computational Intelligence, WCCI 1: 564–569. doi:10.1109/CEC.2002.1006988.
    38. ^ Borodyanskii, Yu M; Burgin, M. S. (1994). "Operations and compositions in transrecursive operators". Cybernetics and Systems Analysis 30 (4): 473–478. doi:10.1007/BF02366556.
    39. ^ Davis, Martin, Why there is no such discipline as hypercomputation, Applied Mathematics and Computation, Volume 178, Issue 1, 1 July 2006, Pages 4–7, Special Issue on Hypercomputation
    40. ^ Davis, Martin (2004). "The Myth of Hypercomputation". Alan Turing: Life and Legacy of a Great Thinker. Springer.
    41. ^ Andrew Hodges (retrieved 23 September 2011). "The Professors and the Brainstorms". The Alan Turing Home Page.

