Showing posts with label Quantum Computers. Show all posts

Wednesday, February 01, 2017

Quantum Technologies

As We Enter a New Quantum Era: Michele Mosca Public Lecture 

Published on Oct 6, 2016
In his public lecture at Perimeter Institute on Oct. 5, 2016, Michele Mosca (Institute for Quantum Computing, Perimeter Institute) explored quantum technologies – those that already exist, and those yet to come – and how they will affect our lives.

Sunday, August 23, 2015

Yves Couder Explains Wave/Particle Duality via Silicon Droplets [Through the Wormhole]

The modern double-slit experiment is a demonstration that light and matter can display characteristics of both classically defined waves and particles; moreover, it displays the fundamentally probabilistic nature of quantum mechanical phenomena. This experiment was performed originally by Thomas Young in 1801 (well before quantum mechanics) simply to demonstrate the wave theory of light and is sometimes referred to as Young's experiment.[1] The experiment belongs to a general class of "double path" experiments, in which a wave is split into two separate waves that later combine into a single wave. Changes in the path lengths of both waves result in a phase shift, creating an interference pattern. Another version is the Mach–Zehnder interferometer, which splits the beam with a mirror. -Double-slit experiment
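The phase shift described above can be put in numbers. The sketch below computes the ideal far-field two-slit pattern, I(θ) ∝ cos²(πd·sinθ/λ); the wavelength and slit separation are illustrative values I have chosen, not from the quoted text, and the single-slit envelope is ignored.

```python
import numpy as np

# Far-field two-slit interference (single-slit envelope ignored):
#   I(theta) ∝ cos^2(pi * d * sin(theta) / wavelength)
# where d is the slit separation. The constants below are illustrative.
wavelength = 500e-9   # 500 nm light
d = 50e-6             # 50 micron slit separation

def intensity(theta):
    """Normalized two-slit interference intensity at angle theta (radians)."""
    return np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2

theta = np.linspace(-0.02, 0.02, 2001)
pattern = intensity(theta)

# Central maximum at theta = 0; minima wherever d*sin(theta) equals a
# half-integer multiple of the wavelength.
print(intensity(0.0))   # 1.0 at the center
```

The bright and dark fringes come purely from the path-length difference between the two slits, which is the "double path" character the quote refers to.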

To some researchers, the experiments suggest that quantum objects are as definite as droplets, and that they too are guided by pilot waves — in this case, fluid-like undulations in space and time. These arguments have injected new life into a deterministic (as opposed to probabilistic) theory of the microscopic world first proposed, and rejected, at the birth of quantum mechanics. See:
Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?


The Binary Pulsar PSR 1913+16:

In the YouTube example video given, I must say that if you have ever seen Hulse and Taylor's binary pulsar system, I couldn't help but see some relation. Such rotation would cause a gravitational wave that seems to hold the droplet in position for examination... but the gravitational wave production is an effect of this rotation, so I am puzzled by this.

Natalie Wolchover is pretty good at her job, and I think she drew attention to the idea of Bohmian mechanics/pilot-wave theory. As an alternative reading of quantum mechanics, it became clear how pervasive interpretation was at the time between these two groups, as a point of view. Not saying this is the case, but as I read I see the division between the scientists as to how an interpretation arose between them: some chose one way and others another. And still they did not discard the world of the two groups, but leaned specifically to one side over the other.

As de Broglie explained that day to Bohr, Albert Einstein, Erwin Schrödinger, Werner Heisenberg and two dozen other celebrated physicists, pilot-wave theory made all the same predictions as the probabilistic formulation of quantum mechanics (which wouldn’t be referred to as the “Copenhagen” interpretation until the 1950s), but without the ghostliness or mysterious collapse. -Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?
I am looking at the experiment itself as illustrated in my link to the YouTube video of the respective scientists, given the relation and analogy used. This is to see the aspect of their relation to something current in our understanding, "as observation," and something much more to it, as particle and wave together. Still trying to understand the analogy. In the experiment, what leads the way: the wave, or the particle/droplet? The "wave function" guides the particle/droplet, yes? Why of course, it is called pilot-wave theory.

Before the experiment begins, then, you know the particle's state "as a wave function," and given that this is already known, "the particle" riding the wave function is exemplary of the nature of the perspective in the first place, as to what is already known. Hmmmm... sounds a little confusing to me, as I was seeing the waves in the experiment, but given that such a state of coalescence exists when the experiment is done, it raises questions for me about the shaker as a necessity.
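The question "what leads the way?" has a concrete answer in pilot-wave theory: the de Broglie–Bohm guidance equation, v(x) = (ħ/m)·Im(ψ′(x)/ψ(x)), reads the particle's velocity off the local phase gradient of the wave. A minimal sketch, in natural units (ħ = m = 1) and with an illustrative two-plane-wave state of my own choosing:

```python
import numpy as np

# de Broglie-Bohm 'guidance equation' in one dimension:
#     v(x) = (hbar/m) * Im( psi'(x) / psi(x) )
# The particle's velocity is dictated by the wave's local phase gradient,
# so the wave literally pilots the particle. Natural units: hbar = m = 1.
hbar = m = 1.0
k1, k2 = 1.0, 2.0

def psi(x):
    # Unequal amplitudes keep psi nonzero everywhere (no nodes to divide by).
    return np.exp(1j * k1 * x) + 0.5 * np.exp(1j * k2 * x)

def guidance_velocity(x, h=1e-6):
    dpsi = (psi(x + h) - psi(x - h)) / (2 * h)   # central difference
    return (hbar / m) * np.imag(dpsi / psi(x))

# Euler-integrate one trajectory: the particle/droplet rides the wave.
x, dt = 0.1, 0.01
path = [x]
for _ in range(1000):
    x += guidance_velocity(x) * dt
    path.append(x)
print(round(path[-1], 3))
```

Given the initial position, the trajectory is fully deterministic; the apparent randomness of quantum outcomes enters only through uncertainty about where the particle started.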

So cosmologically, you are looking to the past? You look up at the night sky, and when were all these messages received in the classical sense, but to be an observer of what happened a long time ago? You recognize the pathway as a wave function already, before the experimenter of the double slit even begins. It has a trajectory path already given, as the wave function is known with regard to A to B. These are not probabilities, then, if recognized as the potential of the wave function as already defining a pathway.

The pathway expressed as the pattern had to have already been established as a causative event in the evolution and recognition of a collision course regarding any synchronized event located in the quantum world, as a wave function pattern. You are dealing with a Bohmian interpretation here.


On the flip side, I see spintronics as a wave function giving consideration to the y direction. It is an analogy that comes to mind when I think of the fluid. Whether right or not, I see an association.

The idea, as a wave function, is seen in regard to this chain as an illustration of the complexity of the fluid surface.

To go further then,

Known as a major facet in the study of quantum hydrodynamics and macroscopic quantum phenomena, the superfluidity effect was discovered by Pyotr Kapitsa[1] and by John F. Allen and Don Misener[2] in 1937. It has since been described through phenomenological and microscopic theories. The formation of the superfluid is known to be related to the formation of a Bose–Einstein condensate. This is made obvious by the fact that superfluidity occurs in liquid helium-4 at far higher temperatures than it does in helium-3. Each atom of helium-4 is a boson particle, by virtue of its zero spin.
Bold and underline added for emphasis

A Bose–Einstein condensate (BEC) is a state of matter of a dilute gas of bosons cooled to temperatures very close to absolute zero (that is, very near 0 K or −273.15 °C). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which point macroscopic quantum phenomena become apparent.
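That "large fraction" has a textbook formula behind it: for an ideal Bose gas below its transition temperature Tc, the condensed fraction is N0/N = 1 − (T/Tc)^(3/2). A minimal sketch of that claim (ideal-gas result only; real condensates deviate from it):

```python
# Ideal Bose gas below its transition temperature Tc: the fraction of
# atoms in the lowest quantum state is N0/N = 1 - (T/Tc)**(3/2).
# A sketch of the "large fraction of bosons" claim in the quote above.
def condensate_fraction(t_over_tc):
    """Condensed fraction N0/N at temperature T, given the ratio T/Tc
    (zero above the transition)."""
    return max(0.0, 1.0 - t_over_tc ** 1.5)

for ratio in (0.0, 0.5, 0.9, 1.0, 1.5):
    print(ratio, round(condensate_fraction(ratio), 3))
```

At T = 0 every atom sits in the ground state; the fraction falls smoothly to zero at Tc and stays zero above it, which is why the macroscopic quantum behaviour only appears very near absolute zero.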
So fast forward to the idealistic perception of the analog, by comparison with today's use, against a backdrop of the theories, and what do we see?

Nevertheless, they have proven useful in exploring a wide range of questions in fundamental physics, and the years since the initial discoveries by the JILA and MIT groups have seen an increase in experimental and theoretical activity. Examples include experiments that have demonstrated interference between condensates due to wave–particle duality,[25] the study of superfluidity and quantized vortices, the creation of bright matter wave solitons from Bose condensates confined to one dimension, and the slowing of light pulses to very low speeds using electromagnetically induced transparency.[26] Vortices in Bose–Einstein condensates are also currently the subject of analogue gravity research, studying the possibility of modeling black holes and their related phenomena in such environments in the laboratory. Experimenters have also realized "optical lattices", where the interference pattern from overlapping lasers provides a periodic potential. These have been used to explore the transition between a superfluid and a Mott insulator,[27] and may be useful in studying Bose–Einstein condensation in fewer than three dimensions, for example the Tonks–Girardeau gas. -

Friday, March 13, 2015

AI's State of Affairs

Human-level control through deep reinforcement learning
-The theory of reinforcement learning provides a normative account,[1] deeply rooted in psychological[2] and neuroscientific[3] perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems,[4][5] the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms.[3] While reinforcement learning agents have achieved some successes in a variety of domains,[6][7][8] their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks[9][10][11] to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games.[12] We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters.
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
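Underneath the deep Q-network sits the classic Q-learning update, Q(s,a) ← Q(s,a) + α[r + γ·max_a′ Q(s′,a′) − Q(s,a)]; the DQN's contribution is replacing the table with a deep network. A toy tabular sketch of that core rule on a 5-state corridor of my own invention (all parameters illustrative, not from the paper):

```python
import random

# Tabular Q-learning, the rule the deep Q-network scales up:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# Toy corridor: 5 states, reward only for reaching the rightmost one.
random.seed(0)
n_states, actions = 5, (-1, +1)              # step left / step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

def greedy(s):
    """Best-known action in state s, breaking ties at random."""
    best = max(Q[(s, a)] for a in actions)
    return random.choice([a for a in actions if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    for step in range(200):                  # cap episode length
        a = random.choice(actions) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if s == n_states - 1:                # reached the rewarding state
            break

# The learned greedy policy steps right from every interior state.
print([greedy(s) for s in range(n_states - 1)])
```

The epsilon-greedy choice is the same exploration/exploitation trade-off the DQN uses; the difference is that here each (state, action) value is stored explicitly, which is exactly what stops tabular methods at "low-dimensional state spaces."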


This demo follows the description of the Deep Q Learning algorithm described in Playing Atari with Deep Reinforcement Learning, a paper from NIPS 2013 Deep Learning Workshop from DeepMind. The paper is a nice demo of a fairly standard (model-free) Reinforcement Learning algorithm (Q Learning) learning to play Atari games.

In this demo, instead of Atari games, we'll start out with something more simple: a 2D agent that has 9 eyes pointing in different angles ahead and every eye senses 3 values along its direction (up to a certain maximum visibility distance): distance to a wall, distance to a green thing, or distance to a red thing. The agent navigates by using one of 5 actions that turn it different angles. The red things are apples and the agent gets reward for eating them. The green things are poison and the agent gets negative reward for eating them. The training takes a few tens of minutes with current parameter settings.

Over time, the agent learns to avoid states that lead to states with low rewards, and picks actions that lead to better states instead.
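The shape of that decision step can be sketched: 9 eyes × 3 sensed values = 27 inputs in, 5 turning actions out, with an epsilon-greedy pick over approximate Q-values. The demo itself uses a small neural network; the single linear layer and random weights below are a simplifying assumption for illustration only.

```python
import numpy as np

# Sketch of the demo agent's decision step: 27 sensor inputs
# (9 eyes x 3 values), 5 actions out. The real demo learns a small
# neural network; a single untrained linear layer is assumed here.
rng = np.random.default_rng(0)
n_inputs, n_actions = 9 * 3, 5
W = rng.normal(scale=0.1, size=(n_actions, n_inputs))   # stand-in weights
b = np.zeros(n_actions)

def act(observation, epsilon=0.1):
    """Epsilon-greedy action from approximate Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))   # explore: random turn
    q = W @ observation + b                   # one Q-value per action
    return int(np.argmax(q))                  # exploit: best-known turn

obs = rng.random(n_inputs)                    # fake sensor reading in [0, 1)
print(act(obs))
```

Training then consists of nudging W and b so that the Q-values predict the discounted apple-minus-poison reward, which is why the agent gradually "avoids states that lead to states with low rewards."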


Code for Human-Level Control through Deep Reinforcement Learning
Please click here to download the code associated with DeepMind's Nature Letter on "Human-Level Control through Deep Reinforcement Learning"

Monday, January 12, 2015

Rationalism vs Empiricism

The dispute between rationalism and empiricism concerns the extent to which we are dependent upon sense experience in our effort to gain knowledge. Rationalists claim that there are significant ways in which our concepts and knowledge are gained independently of sense experience. Empiricists claim that sense experience is the ultimate source of all our concepts and knowledge.

Rationalists generally develop their view in two ways. First, they argue that there are cases where the content of our concepts or knowledge outstrips the information that sense experience can provide. Second, they construct accounts of how reason in some form or other provides that additional information about the world. Empiricists present complementary lines of thought. First, they develop accounts of how experience provides the information that rationalists cite, insofar as we have it in the first place. (Empiricists will at times opt for skepticism as an alternative to rationalism: if experience cannot provide the concepts or knowledge the rationalists cite, then we don't have them.) Second, empiricists attack the rationalists' accounts of how reason is a source of concepts or knowledge. SEE: Markie, Peter, "Rationalism vs. Empiricism", The Stanford Encyclopedia of Philosophy (Summer 2013 Edition), Edward N. Zalta (ed.),

Long before I had come to understand this nature of rationalism, there were already signs that such a journey was being awakened. This was an understanding for me as to the nature of what could be gained from the ability to visualize beyond the empirical nature of our journey into the sensible realm.

I guess in such an awakening, as to what we know, there is the realization that what comes after helps to make that sense. So in a way one might like to see how rationalism together with empiricism actually works. It is not in the sense that I might define one group of historical thinkers against the other to say that one should excel over another, but to define how such a rationally sound person moves toward empiricism, to understand the reality we create by the experimentation and repeatability that empiricism enshrouds.

So this awakening, while slow to materialize, comes from understanding something about the logic of the world and the definitions and architecture of that logical approach. To me, in this day and age, it has led to some theory about which computational view could hold the idea of how we might see this reality. I am reticent to view this as a form of that reality. What holds me back is that a self-evident moment, using deduced features of our reasoning, could move us to that moment of clarity.

The Empiricism Thesis: We have no source of knowledge in S or for the concepts we use in S other than sense experience. -Empiricism

Empirical fact would not be the basis of reality for Nick Bostrom's simulation argument, for instance. I hope to explain why.

The basis of this association (rationalist, or empiricist) is whether one gains knowledge by a deductive method or an inductive method. Sense experience tells us that science as we know it is inductive: we must garner repeatable experiments to verify reality. A rationalist proceeds by logic and reason of theory alone; verification comes afterward. For a rationalist, something deductive which can be true can be "innate" before we accept the inductive method; that is, it can be rationally ascertained. It is only afterward that such a process could be said to be true or false.

If the late character of our sources may incite us to doubt the authenticity of this tradition, there remains that, in its spirit, it is in no way out of character, as can be seen by reading or rereading what Plato says about the sciences fit for the formation of philosophers in book VII of the Republic, and especially about geometry at Republic, VII, 526c8-527c11. We should only keep in mind that, for Plato, geometry, as well as all other mathematical sciences, is not an end in itself, but only a prerequisite meant to test and develop the power of abstraction in the student, that is, his ability to go beyond the level of sensible experience which keeps us within the "visible" realm, that of the material world, all the way to the pure intelligible. And geometry, as can be seen through the experiment with the slave boy in the Meno (Meno, 80d1-86d2), can also make us discover the existence of truths (that of a theorem of geometry such as, in the case of the Meno, the one about doubling a square) that may be said to be "transcendant" in that they don't depend upon what we may think about them, but have to be accepted by any reasonable being, which should lead us into wondering whether such transcendant truths might not exist as well in other areas, such as ethics and matters relating to men's ultimate happiness, whether we may be able to "demonstrate" them or not.See: Frequently Asked Questions about Plato by Bernard SUZANNE
When you examine deeply the very nature of your journey, then, you come to realize what is hidden underneath "experience." So while being an empiricist, it is necessary to know that such a joining with the rationalist correlates with the reasoned only after the mentioned experience. These are "corollary experiences," which serve to identify that which had been identified long before the sensible world had been made known.

Paradoxically, it was Einstein who reluctantly introduced the notion of spontaneous events, which might after all be the root of Bellʼs theorem. The lesson for the future could, however, be that we should build the notion of locality on the operationally clear 'no-signalling' condition—the impossibility of transferring information faster than light. After all, this is all that theory of relativity requires.

The moral of the story is that Bellʼs theorem, in all its forms, tells us not what quantum mechanics is, but what quantum mechanics is not.
Quantum non-locality—it ainʼt necessarily so... -

Empiricism, then, is to validate as a corollary that which had been cognate (maybe a poor choice of word here; I should instead use cognition). This does not mean you stop the process, but extend the visionary possibility of that which can be cognitive... peering into the very nature of reality. It becomes the "we should build the notion of locality on the operationally clear 'no-signalling' condition."

Here the question of entanglement raises its head, to ask what is really being transmitted as the corollary of information, as a direct physical connection in a computational system. In a quantum gravity scheme, might we examine what is exchanged as a spin-2 graviton in the corollary of this no-signalling condition, but as a direct understanding of gravitational signalling?
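The no-signalling condition itself can be checked numerically for the textbook entangled state. The sketch below builds the two-qubit singlet and verifies that Alice's outcome statistics are the same no matter which angle Bob measures, even though the joint outcomes are correlated; the angles are arbitrary illustrative values.

```python
import numpy as np

# Numerical check of 'no-signalling' for the singlet state: Alice's
# marginal statistics do not depend on Bob's measurement setting.
def spin_projectors(theta):
    """Projectors onto spin-up/down along an axis at angle theta."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    P_up = np.outer(up, up)
    return P_up, np.eye(2) - P_up

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def alice_up_probability(theta_a, theta_b):
    Pa, _ = spin_projectors(theta_a)
    # Sum over Bob's outcomes to get Alice's marginal probability of 'up'.
    prob = 0.0
    for Pb in spin_projectors(theta_b):
        prob += np.real(singlet @ np.kron(Pa, Pb) @ singlet)
    return prob

# Changing Bob's setting leaves Alice's marginal untouched (= 1/2).
print(alice_up_probability(0.3, 0.0), alice_up_probability(0.3, 1.2))
```

Whatever is "transmitted" through entanglement, it cannot be used to send a message: the marginal on one side is setting-independent, which is exactly the operational content of the quoted no-signalling condition.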

Such an examination reveals that the innate process by which we may already know "some thing" is awakened by moving into the world of science. While we consider such computational reality in the context of an ontological question, then, such a journey may be represented as the geometry of the being, which reveals a deeper question about the make-up of that reality.

An Affective Field Theory of Emotion regarding sensory development may aid in the journey toward understanding the place in which "the idea/form in expression arises," and that which is reasoned, beyond the related abstractions of "such a beginning," by becoming the ideal in the empiricist world.

Sunday, December 21, 2014

The Architecture of Matter?


I cannot say for certain, and I speculate. Buckyballs then bring to mind this architectural structure? Let me give you an example of a recent discovery. I have to wonder if Bucky was a Platonist at heart... with grand ideas? Perhaps you recognize some Platonist idea about perfection, as if mathematically a Tegmarkian might have found some truth? Some absolute truth? Perhaps a Penrose truth (Quasicrystal and Information)?

Aperiodic tilings serve as mathematical models for quasicrystals, physical solids that were discovered in 1982 by Dan Shechtman,[3] who subsequently won the Nobel Prize in 2011.[4] However, the specific local structure of these materials is still poorly understood. -Aperiodic tilings

While one starts with a single point of entry... the whole process, from another perspective, is encapsulated. So you might work from the hydrogen spectrum as a start, with the assumption that this process in itself is enclosed.

The future lies in encapsulating all electromagnetic forces under the auspices of, and enclosed within, the understanding of gravity?

240 E₈ polytope vertices using a 5D orthographic projection to 2D, with 5-cube (penteract) Petrie polygon basis vectors, overlaid on an electron diffraction pattern of an icosahedral Zn-Mg-Ho quasicrystal. -E8 (mathematics) and Quasicrystals
At the same time one might understand the complexity of the issue?

By now it is known theoretically that quantum angular momentum of any kind has a discrete spectrum, which is sometimes imprecisely expressed as "angular momentum is quantized." -Stern–Gerlach experiment
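"Discrete spectrum" here means the measurable values are the eigenvalues of the spin operator, and there are only finitely many of them for a given spin. A minimal sketch, building the diagonal S_z matrix in units of ħ for spin-1/2 and spin-1:

```python
import numpy as np

# "Angular momentum is quantized": a measurement of S_z can only return
# one of the discrete eigenvalues m = s, s-1, ..., -s (in units of hbar).
def spin_z(s):
    """S_z matrix for spin quantum number s (diagonal entries m = s..-s)."""
    m = np.arange(s, -s - 1, -1)
    return np.diag(m.astype(float))

for s in (0.5, 1.0):
    print(s, np.linalg.eigvalsh(spin_z(s)))
```

For spin-1/2 the only possible outcomes are ±ħ/2, which is exactly the two-spot pattern the Stern–Gerlach experiment observed.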


So possibly there is a photon polarization principle inherent in a quantum description of the wave, and such a principle inherent in the use of photosynthesis: to describe a property not just of the capability of using sunlight, but of understanding this principle biologically in human beings? I actually have an example of this use, theoretically, as a product. Maybe Elon Musk might like to use it?

Photonic molecules are a synthetic form of matter in which photons bind together to form "molecules". According to Mikhail Lukin, individual (massless) photons "interact with each other so strongly that they act as though they have mass". The effect is analogous to refraction. The light enters another medium, transferring part of its energy to the medium. Inside the medium, it exists as coupled light and matter, but it exits as light.[1]

While I would like to make it easy for you, I can only leave a title for your examination. "The Nobel Prize in Physics 1914 Max von Laue." Yes, but if it is understood that some correlate process can be understood from "a fundamental position," as to the architecture of matter, what would this light have to say about the component structuralism of the information we are missing?

The idea is not new. From a science fiction point of view, Star Trek had these units where, when you were hungry or wanted a drink, you would have the object materialize in a microwave-type oven? Not the transporter.

So, you have this 3D printer accessing all information about the structure, with access to the building blocks of all matter and energy, funneled through this replicator.


When Bucky was waving his arm between the earth and the moon... did he know about the three-body problem, or how to look at the space between these bodies in another way? If people think this is not real, then you will have to tell those who use celestial mechanics that they are plotting their satellite trajectories all wrong.

 Ephemeralization, a term coined by R. Buckminster Fuller, is the ability of technological advancement to do "more and more with less and less until eventually you can do everything with nothing".[1] Fuller's vision was that ephemeralization will result in ever-increasing standards of living for an ever-growing population despite finite resources.

Exactly. So it was not just "hand waving" Buckminster Fuller was alluding to, but some actual understanding of "more with less?" One applies the principle then? See? I am using your informational video to explain.

 ARTEMIS-P1 is the first spacecraft to navigate to and perform stationkeeping operations around the Earth-Moon L1 and L2 Lagrangian points. There are five Lagrangian points associated with the Earth-Moon system. ARTEMIS - The First Earth-Moon Libration Orbiter -

To do more with less, it has to be understood that crossing distance needs a minimum of fuel to project the satellite over a great distance. So they use "momentum" to swing satellites forward?
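The L1 point ARTEMIS-P1 orbits can even be located with a few lines of root-finding: it is where Earth's pull, the Moon's pull, and the centrifugal term of the rotating frame cancel. The sketch below is a simplified balance (centrifugal term measured from Earth's centre, barycentre offset neglected) with textbook constants, so treat the answer as approximate.

```python
# Approximate Earth-Moon L1: the point between the two bodies where
# gravity and the rotating-frame centrifugal term balance, which is why
# station-keeping there needs so little fuel. Simplification assumed:
# centrifugal term measured from Earth's centre (barycentre neglected).
R = 384_400.0          # mean Earth-Moon distance, km
q = 0.0123             # Moon mass / Earth mass

def f(x):
    """Net radial acceleration (arbitrary units) at distance x from Earth."""
    return 1.0 / x**2 - q / (R - x)**2 - (1.0 + q) * x / R**3

lo, hi = 0.5 * R, 0.999 * R          # f(lo) > 0 > f(hi): root bracketed
for _ in range(100):                 # bisection
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

print(round(lo))       # ~326,000 km from Earth (~58,000 km from the Moon)
```

A satellite parked slightly off this balance point drifts only slowly, so small, cheap corrections suffice: doing "more with less" in a very literal sense.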

 This is a list of various types of equilibrium, the condition of a system in which all competing influences are balanced. List of types of equilibrium -

Saturday, October 18, 2014

Information Technology

Who are we? And what is our role in the universe? Information technology is radically changing not only how we deal with the world and make sense of it, or interact with each other, but also how we look at ourselves and understand our own existence and responsibilities. Philosophy Professor Floridi ( @Floridi ) will discuss such impact of information technology on our lives and on our self-understanding; he will take us along the Copernican revolution, the Darwinian revolution, the Freudian revolution right up to speed with the Turing revolution: a world of inforgs in a global environment ultimately made of information. Floridi will talk about expanding our ecological and ethical approach to both natural and man-made realities, in order to cope successfully with the new moral challenges posed by information technology. Ready for some philosophy? You bet!

Thursday, October 09, 2014

Majorana Fermions Discovered

A Majorana fermion, also referred to as a Majorana particle, is a fermion that is its own antiparticle. They were hypothesized by Ettore Majorana in 1937. The term is sometimes used in opposition to a Dirac fermion, which describes fermions that are not their own antiparticles.
All of the Standard Model fermions except the neutrino behave as Dirac fermions at low energy (after electroweak symmetry breaking), but the nature of the neutrino is not settled and it may be either Dirac or Majorana. In condensed matter physics, Majorana fermions exist as quasiparticle excitations in superconductors and can be used to form Majorana bound states governed by non-abelian statistics.


Princeton University physicists built a powerful imaging device called a scanning-tunneling microscope and used it to capture an image of an elusive particle that behaves simultaneously like matter and antimatter. To avoid vibration, the microscope is cooled close to absolute zero and is suspended like a floating island in the floor above. The setup includes a 40-ton block of concrete, which is visible above the researchers. The research team includes, from left, graduate student Ilya Drozdov, postdoctoral researcher Sangjun Jeon, and professors of physics B. Andrei Bernevig and Ali Yazdani. (Photo by Denise Applewhite, Office of Communications)
Princeton University scientists have observed an exotic particle that behaves simultaneously like matter and antimatter, a feat of math and engineering that could eventually enable powerful computers based on quantum mechanics. Capping decades of searching, Princeton scientists observe elusive particle that is its own antiparticle.

 Majorana fermions are predicted to localize at the edge of a topological superconductor, a state of matter that can form when a ferromagnetic system is placed in proximity to a conventional superconductor with strong spin-orbit interaction. With the goal of realizing a one-dimensional topological superconductor, we have fabricated ferromagnetic iron (Fe) atomic chains on the surface of superconducting lead (Pb). Using high-resolution spectroscopic imaging techniques, we show that the onset of superconductivity, which gaps the electronic density of states in the bulk of the Fe chains, is accompanied by the appearance of zero energy end states. This spatially resolved signature provides strong evidence, corroborated by other observations, for the formation of a topological phase and edge-bound Majorana fermions in our atomic chains. Observation of Majorana fermions in ferromagnetic atomic chains on a superconductor



Saturday, November 30, 2013

Quantum Computing and Evolution?

The unique capability of quantum mechanics to evolve alternative possibilities in parallel is appealing and over the years a number of quantum algorithms have been developed offering great computational benefits. Systems coupled to the environment lose quantum coherence quickly and realization of schemes based on unitarity might be impossible. Recent discovery of room temperature quantum coherence in light harvesting complexes opens up new possibilities to borrow concepts from biology to use quantum effects for computational purposes. While it has been conjectured that light harvesting complexes such as the Fenna-Matthews-Olson (FMO) complex in the green sulfur bacteria performs an efficient quantum search similar to the quantum Grover's algorithm the analogy has yet to be established. See: Evolutionary Design in Biological Quantum Computing
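The Grover search the quoted paper refers to is small enough to simulate directly. A minimal sketch over N = 4 items: one oracle-plus-diffusion iteration drives all the amplitude onto the marked item, versus the ~2-3 expected classical queries. The marked index is arbitrary.

```python
import numpy as np

# Minimal Grover search over N = 4 items (the algorithm the quoted paper
# conjectures the FMO complex approximates). For N = 4, a single Grover
# iteration finds the marked item with probability 1.
N, marked = 4, 2
psi = np.ones(N) / np.sqrt(N)            # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1              # flip the marked amplitude's sign

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

psi = diffusion @ (oracle @ psi)
print(np.round(psi ** 2, 6))             # probability concentrated on index 2
```

The quantum advantage is in query count: Grover needs O(√N) oracle calls against O(N) classically, which is what makes a biological implementation, if real, so interesting.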

The Bloch sphere is a representation of a qubit, the fundamental building block of quantum computers.

Quantum Light Harvesting Hints at Entirely New Form of Computing


Friday, December 07, 2012

Spintronics made easy

Spintronics (a neologism for "spin-based electronics"), also known as magnetoelectronics, is an emerging technology which exploits the quantum spin states of electrons as well as making use of their charge state. The electron spin itself is manifested as a two state magnetic energy system. The discovery of giant magnetoresistance in 1988 by Albert Fert et al. and Peter Grünberg et al. independently is considered as the birth of spintronics.

David Awschalom explains how the spin of the electron could be exploited in completely new types of electronic circuits. If you are new to spintronics, or if you are wondering what all the excitement is about, David Awschalom of the University of California, Santa Barbara provides a fantastic introduction to the field and explains how electron spin could be harnessed to create even denser computer memories and even quantum computers. In this video interview Awschalom also outlines the challenges that must be overcome before we see the next generation of spintronics devices and explains how he is addressing some of these in his lab.

TEDxCaltech - David Awschalom - "Spintronics: Abandoning Perfection for the Quantum Age"


Monday, July 09, 2012

Majorana Particles in Computation

An artist's conception of the Majorana - a previously elusive subatomic particle whose existence has never been confirmed - until now. Dutch nano-scientists at the technological universities of Delft and Eindhoven say they have found evidence of the particle. To find it, they devised minuscule circuitry around a microscopic wire in contact with a semiconductor and a superconductor. Lead researcher Leo Kouwenhoven. SOUNDBITE (English), NANOSCIENTIST OF DELFT UNIVERSITY, LEO KOUWENHOVEN, SAYING: "The samples that we use for measuring the Majorana fermions are really very small, you can see the holder of the sample, the sample is actually inside here and if you zoom in, you can actually see little wires and if you zoom in more, you see a very small nano-meter scale sample, where we detected one pair of Majoranas." When a magnetic field was applied along the 'nanowire', electrons gathered together in synchrony as a Majorana particle. These subatomic particles could be used to encode information, turning them into data to be used inside a tiny, quantum computer. SOUNDBITE (English), NANOSCIENTIST OF DELFT UNIVERSITY, LEO KOUWENHOVEN, SAYING: "The goal is actually to develop those nano-scale devices into little circuits and actually make something like a quantum computer out of it, so they have special properties that could be very useful for computation, a particular kind of computation which we call quantum computation, which would replace actually our current computers by computers that are much more efficient than what we have now." The Majorana fermion's existence was first predicted 75 years ago by Italian Ettore Majorana. Probing the Majorana particles could allow scientists to understand better the mysterious realm of quantum mechanics. Other groups working in solid state physics are thought to be close to making similar announcements... heralding a new era in super-powerful computer technology.
Were he alive today Majorana may well be amazed at the sophisticated computer technology available to ordinary people in every day life. But compared to the revolution his particle may be about to spark, it will seem old fashioned in the not too distant future. Jim Drury, Reuters

Majorana fermion

Composition: Elementary
Statistics: Fermionic
Status: Hypothetical
Antiparticle: Itself
Theorised: Ettore Majorana, 1937

A Majorana fermion is a fermion that is its own anti-particle. The term is sometimes used in opposition to Dirac fermion, which describes particles that differ from their antiparticles. It is common that bosons (such as the photon) are their own anti-particle. It is also quite common that fermions can be their own anti-particle, such as the fermionic quasiparticles in spin-singlet superconductors (where the quasiparticles/Majorana-fermions carry spin-1/2) and in superconductors with spin-orbital coupling, such as Ir, (where the quasiparticles/Majorana-fermions do not carry well defined spins).




The concept goes back to Ettore Majorana's 1937 suggestion[1] that neutral spin-1/2 particles can be described by a real wave equation (the Majorana equation), and would therefore be identical to their antiparticle (since the wave function of particle and antiparticle are related by complex conjugation).
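For reference, the real wave equation in question can be written, in one common textbook convention (with C the charge-conjugation matrix), as:

```latex
i\gamma^{\mu}\partial_{\mu}\psi - m\,\psi_{c} = 0,
\qquad \psi_{c} \equiv C\,\bar{\psi}^{\mathsf{T}},
```

together with the Majorana condition ψ_c = ψ, which makes the particle identical to its antiparticle.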

The difference between Majorana fermions and Dirac fermions can be expressed mathematically in terms of the creation and annihilation operators of second quantization. The creation operator γj† creates a fermion in quantum state j, while the annihilation operator γj annihilates it (or, equivalently, creates the corresponding antiparticle). For a Dirac fermion the operators γj† and γj are distinct, while for a Majorana fermion they are identical.
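This is easy to check in matrix form for a single fermionic mode. The sketch below (plain NumPy, illustrative only) builds the two Hermitian combinations γ₁ = c + c† and γ₂ = i(c† − c) and verifies that each equals its own conjugate:

```python
import numpy as np

# Single fermionic mode in the occupation basis {|0>, |1>}.
c = np.array([[0, 1],
              [0, 0]], dtype=complex)   # annihilation operator
cd = c.conj().T                          # creation operator c†

# For a Dirac fermion, creation and annihilation operators are distinct.
assert not np.allclose(c, cd)

# Two Hermitian (Majorana) combinations: each is its own conjugate.
g1 = c + cd
g2 = 1j * (cd - c)

for g in (g1, g2):
    assert np.allclose(g, g.conj().T)       # γ = γ†  (its own antiparticle)
    assert np.allclose(g @ g, np.eye(2))    # γ² = 1

# The two combinations anticommute, like independent degrees of freedom.
assert np.allclose(g1 @ g2 + g2 @ g1, np.zeros((2, 2)))
print("one Dirac mode = two Majorana modes: checks passed")
```

In other words, one ordinary (Dirac) fermionic mode can always be split into two self-conjugate Majorana operators; the nontrivial physics lies in separating the two halves spatially.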


Elementary particle

No elementary particle is known to be a Majorana fermion. However, the nature of the neutrino is not yet definitely settled; it might be a Majorana fermion or it might be a Dirac fermion. If it is a Majorana fermion, then neutrinoless double beta decay is possible; experiments are underway to search for this type of decay.
The hypothetical neutralino of supersymmetric models is a Majorana fermion.



In superconducting materials, Majorana fermions can emerge as (non-fundamental) quasiparticles.[2] (The name "Majorana fermion" is also applied to protected zero-energy modes; the rest of this article is in fact about such protected zero-energy modes, which are quite different from the propagating particle introduced by Majorana.) The superconductor imposes electron-hole symmetry on the quasiparticle excitations, relating the creation operator γ†(E) at energy E to the annihilation operator γ(−E) at energy −E. At the Fermi level E = 0, one has γ† = γ, so the excitation is a Majorana fermion. Since the Fermi level lies in the middle of the superconducting gap, these are midgap states. A quantum vortex in certain superconductors or superfluids can trap midgap states, so this is one source of Majorana fermions.[3][4][5] Shockley states at the end points of superconducting wires or line defects are an alternative, purely electrical, source.[6] An altogether different source uses the fractional quantum Hall effect as a substitute for the superconductor.[7]

It was predicted that Majorana fermions in superconductors could be used as a building block for a (non-universal) topological quantum computer, in view of their non-Abelian anyonic statistics.[8]
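The end-of-wire modes mentioned above can be illustrated with Kitaev's one-dimensional toy model.[6] The sketch below (NumPy; the chain length and couplings are illustrative, chosen at the exactly solvable point t = Δ, μ = 0) diagonalizes the Bogoliubov-de Gennes matrix and exhibits the pair of zero-energy end modes inside the bulk gap:

```python
import numpy as np

# Kitaev's 1D p-wave wire:
#   H = sum_j [ -mu c†_j c_j - t (c†_j c_{j+1} + h.c.) + Delta (c_j c_{j+1} + h.c.) ]
# at the "sweet spot" t = Delta, mu = 0, where the two end Majoranas
# decouple from the Hamiltonian exactly.
N, t, Delta, mu = 40, 1.0, 1.0, 0.0

h = -mu * np.eye(N)                  # normal (hopping + chemical potential) block
for j in range(N - 1):
    h[j, j + 1] = h[j + 1, j] = -t

d = np.zeros((N, N))                 # antisymmetric pairing block
for j in range(N - 1):
    d[j, j + 1] = Delta
    d[j + 1, j] = -Delta

# Particle-hole symmetric BdG matrix in the (c, c†) basis.
H = np.block([[h, d], [-d, -h]])

E = np.sort(np.abs(np.linalg.eigvalsh(H)))
print(E[:4])   # two (numerically) zero end modes, then the bulk gap
```

Away from the sweet spot the two lowest energies split away from zero exponentially slowly in the wire length, which is the sense in which the modes are "topologically protected".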


Experiments in superconductivity

In 2008 Fu and Kane provided a groundbreaking development by theoretically predicting that Majorana fermions can appear at the interface between topological insulators and superconductors.[9][10] Many proposals of a similar spirit soon followed. An intense search to provide experimental evidence of Majorana fermions in superconductors[11][12] first produced some positive results in 2012.[13][14] A team from the Kavli Institute of Nanoscience at Delft University of Technology in the Netherlands reported an experiment involving indium antimonide nanowires connected to a circuit with a gold contact at one end and a slice of superconductor at the other. When exposed to a moderately strong magnetic field the apparatus showed a peak electrical conductance at zero voltage that is consistent with the formation of a pair of Majorana quasiparticles, one at either end of the region of the nanowire in contact with the superconductor.[15]

This experiment from Delft marks a possible verification of independent theoretical proposals from two groups[16][17] predicting the solid state manifestation of Majorana fermions in semiconducting wires.

It is important to note that the solid-state manifestations of Majorana fermions are emergent low-energy localized modes of the system (quasiparticles). They are not fundamental new elementary particles as originally envisioned by Majorana (or as the neutrino would be, if it turns out to be a Majorana fermion), but effective linear combinations of half-electrons and half-holes: topological anyonic objects obeying non-Abelian statistics.[8] The terminology "Majorana fermion" is thus not a good name for these solid-state Majorana modes.



  1. ^ E. Majorana (1937). "Teoria simmetrica dell’elettrone e del positrone" (in Italian). Nuovo Cimento 14: 171. English translation.
  2. ^ F. Wilczek (2009). "Majorana returns". Nature Physics 5 (9): 614. Bibcode 2009NatPh...5..614W. DOI:10.1038/nphys1380.
  3. ^ N.B. Kopnin; Salomaa (1991). "Mutual friction in superfluid 3He: Effects of bound states in the vortex core". Physical Review B 44 (17): 9667. Bibcode 1991PhRvB..44.9667K. DOI:10.1103/PhysRevB.44.9667.
  4. ^ G.E. Volovik (1999). "Fermion zero modes on vortices in chiral superconductors". JETP Letters 70 (9): 609. Bibcode 1999JETPL..70..609V. DOI:10.1134/1.568223.
  5. ^ N. Read; Green (2000). "Paired states of fermions in two dimensions with breaking of parity and time-reversal symmetries and the fractional quantum Hall effect". Physical Review B 61 (15): 10267. Bibcode 2000PhRvB..6110267R. DOI:10.1103/PhysRevB.61.10267.
  6. ^ A. Yu. Kitaev (2001). "Unpaired Majorana fermions in quantum wires". Physics-Uspekhi (supplement) 44 (131): 131. Bibcode 2001PhyU...44..131K. DOI:10.1070/1063-7869/44/10S/S29.
  7. ^ G. Moore; Read (1991). "Nonabelions in the fractional quantum Hall effect". Nuclear Physics B 360 (2–3): 362. Bibcode 1991NuPhB.360..362M. DOI:10.1016/0550-3213(91)90407-O.
  8. ^ a b C. Nayak, S. Simon, A. Stern, M. Freedman, and S. Das Sarma (2008). "Non-Abelian anyons and topological quantum computation". Reviews of Modern Physics 80: 1083.
  9. ^ L. Fu; C. L. Kane (2008). "Superconducting Proximity Effect and Majorana Fermions at the Surface of a Topological Insulator". Physical Review Letters 100 (9): 096407. DOI:10.1103/PhysRevLett.100.096407.
  10. ^ L. Fu; C. L. Kane (2009). "Josephson current and noise at a superconductor/quantum-spin-Hall-insulator/superconductor junction". Physical Review B 79 (16): 161408. DOI:10.1103/PhysRevB.79.161408.
  11. ^ J. Alicea. New directions in the pursuit of Majorana fermions in solid state systems. arXiv:1202.1293.
  12. ^ C. W. J. Beenakker. Search for Majorana fermions in superconductors. arXiv:1112.1950.
  13. ^ E. S. Reich (28 February 2012). "Quest for quirky quantum particles may have struck gold". Nature News. DOI:10.1038/nature.2012.10124.
  14. ^ Jonathan Amos (13 April 2012). "Majorana particle glimpsed in lab". BBC News. Retrieved 15 April 2012.
  15. ^ V. Mourik; K. Zuo; S.M. Frolov; S.R. Plissard; E.P.A.M. Bakkers; L.P. Kouwenhoven (12 April 2012). "Signatures of Majorana fermions in hybrid superconductor-semiconductor nanowire devices". Science. arXiv:1204.2792. DOI:10.1126/science.1222360.
  16. ^ R. Lutchyn; J. Sau; S. Das Sarma (2010). "Majorana Fermions and a Topological Phase Transition in Semiconductor-Superconductor Heterostructures". Physical Review Letters 105 (7): 077001. Bibcode 2010PhRvL.105g7001L. DOI:10.1103/PhysRevLett.105.077001.
  17. ^ Y. Oreg; G. Refael; F. von Oppen (2010). "Helical Liquids and Majorana Bound States in Quantum Wires". Physical Review Letters 105 (17): 177002. DOI:10.1103/PhysRevLett.105.177002.

The Majorana experiment will search for neutrinoless double-beta decay of 76Ge. The discovery of this process would imply that the neutrino is a Majorana fermion (its own antiparticle) and allow a measurement of the effective Majorana neutrino mass. The first stage of the experiment, the Majorana Demonstrator, will consist of 60 kg of germanium crystal detectors in three cryostats, half made from natural germanium and half from germanium enriched in 76Ge. The goals of the Demonstrator are to test a claimed measurement of neutrinoless double-beta decay by Klapdor-Kleingrothaus et al. (2006), to demonstrate a background low enough to justify the construction of a larger tonne-scale experiment, and to demonstrate the scalability of the technology to the tonne scale. The experiment will be located at the 4850-foot level of the Sanford Laboratory in Lead, South Dakota. See: The Majorana neutrinoless double beta-decay experiment

See Also: Sounding Off on the Dark Matter Issue

Wednesday, May 23, 2012

Hypercomputation


Hypercomputation or super-Turing computation refers to models of computation that go beyond, or are incomparable to, Turing computability. This includes various hypothetical methods for the computation of non-Turing-computable functions, following super-recursive algorithms (see also supertask). The term "super-Turing computation" appeared in a 1995 Science paper by Hava Siegelmann. The term "hypercomputation" was introduced in 1999 by Jack Copeland and Diane Proudfoot.[1]

The terms are not quite synonymous: "super-Turing computation" usually implies that the proposed model is supposed to be physically realizable, while "hypercomputation" does not.

Technical arguments against the physical realizability of hypercomputations have been presented.





A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals.[2] This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are strictly mathematical abstractions, and are not physically realizable.[3]

Hypercomputation and the Church–Turing thesis


The Church–Turing thesis states that any function that is algorithmically computable can be computed by a Turing machine. Hypercomputers would compute functions that a Turing machine cannot, and which are therefore not computable in the Church–Turing sense.
An example of a problem a Turing machine cannot solve is the halting problem: no Turing machine can decide whether an arbitrary program halts or runs forever. Some proposed hypercomputers could simulate the program for an infinite number of steps and tell the user whether or not it halted.
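The one-sided nature of the halting problem is easy to see in code. In the toy sketch below (Python; programs are modelled, purely for illustration, as generators that yield once per computation step), a bounded simulation can confirm halting but can never confirm looping:

```python
# A step-bounded "halts within n steps" check is computable; the
# unbounded halting question is not.

def halts_within(prog, n):
    """True if prog halts within n simulated steps; False means 'not yet'."""
    g = prog()
    for _ in range(n):
        try:
            next(g)
        except StopIteration:
            return True          # halting can be positively confirmed
    return False                 # ...but non-halting never can

def halting_prog():              # a toy program that halts after 5 steps
    for _ in range(5):
        yield

def looping_prog():              # a toy program that never halts
    while True:
        yield

print(halts_within(halting_prog, 10))     # True
print(halts_within(looping_prog, 10**4))  # False, yet this proves nothing
```

A hypercomputer, by definition, would be a device that could return the true answer for `looping_prog` as well, something no finite step budget achieves.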

Hypercomputer proposals


  • A Turing machine that can complete infinitely many steps. (Simply being able to run for an unbounded number of steps does not suffice.) One mathematical model is the Zeno machine, inspired by Zeno's paradox: it performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, and so on. Summing the geometric series 1 + ½ + ¼ + ... shows that the machine performs infinitely many steps in a total of 2 minutes. However, it has been argued, following the reasoning of Zeno's paradox, that Zeno machines are not just physically impossible but logically impossible.[4]
  • Turing's original oracle machines, defined by Turing in 1939.
  • In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals"[5] and "trial-and-error predicates",[6] respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify the answer as correct if it is recursive; otherwise, correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)"[6] L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem"[7] studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
  • A real computer (a sort of idealized analog computer) can perform hypercomputation[8] if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would at minimum require the ability to measure a real-valued physical value to arbitrary precision despite thermal noise and quantum effects.
  • A proposed technique known as fair nondeterminism or unbounded nondeterminism may allow the computation of noncomputable functions.[9] There is dispute in the literature over whether this technique is coherent, and whether it actually allows noncomputable functions to be "computed".
  • It might seem natural that the possibility of time travel (the existence of closed timelike curves, or CTCs) would make hypercomputation possible by itself. However, this is not so, since a CTC does not (by itself) provide the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation.[10] Access to a CTC may allow the rapid solution of PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.[11][12]
  • According to a 1992 paper,[13] a computer operating in a Malament-Hogarth spacetime or in orbit around a rotating black hole[14] could theoretically perform non-Turing computations.[15][16]
  • In 1994, Hava Siegelmann proved that her new (1991) computational model, the Artificial Recurrent Neural Network (ARNN), could perform hypercomputation (using infinite precision real weights for the synapses). It is based on evolving an artificial neural network through a discrete, infinite succession of states.[17]
  • The infinite time Turing machine is a generalization of the Zeno machine, that can perform infinitely long computations whose steps are enumerated by potentially transfinite ordinal numbers. It models an otherwise-ordinary Turing machine for which non-halting computations are completed by entering a special state reserved for reaching a limit ordinal and to which the results of the preceding infinite computation are available.[18]
  • Jan van Leeuwen and Jiří Wiedermann wrote a 2000 paper[19] suggesting that the Internet should be modeled as a nonuniform computing system equipped with an advice function representing the ability of computers to be upgraded.
  • A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges, that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber ([20][21]) uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can solve the halting problem by evaluating a Specker sequence.
  • A quantum mechanical system which somehow uses an infinite superposition of states to compute a non-computable function.[22] This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).[23]
  • In 1970, E.S. Santos defined a class of fuzzy logic-based "fuzzy algorithms" and "fuzzy Turing machines".[24] Subsequently, L. Biacino and G. Gerla showed that such a definition would allow the computation of nonrecursive languages; they suggested an alternative set of definitions without this difficulty.[25] Jiří Wiedermann analyzed the capabilities of Santos' original proposal in 2004.[26]
  • Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic.[27]
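Several of the proposals above (Gold's limiting recursion, Putnam's trial-and-error predicates, Schmidhuber's generalized Turing machines) share one mechanism: a sequence of revisable guesses that converges to the right answer, with no computable signal that convergence has already occurred. A minimal illustration (Python; the generator-based program model is a toy assumption):

```python
# Putnam's "trial and error" idea in miniature: emit revisable guesses for
# the noncomputable predicate "this program never halts". The n-th guess
# simulates n steps; the guesses converge to the truth, but no finite
# prefix of the sequence reveals that it has stabilized.

def guesses(prog, max_n):
    for n in range(1, max_n + 1):
        g = prog()
        halted = False
        for _ in range(n):
            try:
                next(g)
            except StopIteration:
                halted = True
                break
        yield not halted        # current guess for "never halts"

def halting_prog():             # a toy program that halts after 5 steps
    for _ in range(5):
        yield

print(list(guesses(halting_prog, 8)))
# [True, True, True, True, True, False, False, False]
```

The sequence starts out wrong ("never halts") and stabilizes to the correct answer False; for a genuinely non-halting program it would stay True forever, and only an infinite observation could certify that.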


Analysis of capabilities

Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing Σ⁰₁ or Π⁰₁. Limiting recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be Δ⁰₂. Gold further showed that limiting partial recursion would allow the computation of precisely the Σ⁰₂ predicates.

Model | Computable predicates | Notes | Refs
supertasking | tt(Σ⁰₁, Π⁰₁) | dependent on outside observer | [28]
limiting/trial-and-error | Δ⁰₂ | | [5][6]
iterated limiting (k times) | Δ⁰ₖ₊₁ | | [7]
Blum-Shub-Smale machine | | incomparable with traditional computable real functions | [29]
Malament-Hogarth spacetime | HYP | dependent on spacetime structure | [30]
analog recurrent neural network | Δ⁰₁[f] | f is an advice function giving connection weights; size is bounded by runtime | [31][32]
infinite time Turing machine | ≥ T(Σ¹₁) | | [33]
classical fuzzy Turing machine | Σ⁰₁ ∪ Π⁰₁ | for any computable t-norm | [34]
increasing function oracle | Δ¹₁ | for the one-sequence model; Π¹₁ are r.e. | [27]


Taxonomy of "super-recursive" computation methodologies

Burgin has collected a list of what he calls "super-recursive algorithms" (from Burgin 2005: 132):
  • limiting recursive functions and limiting partial recursive functions (E. M. Gold[5])
  • trial and error predicates (Hilary Putnam[6])
  • inductive inference machines (Carl Herbert Smith)
  • inductive Turing machines (one of Burgin's own models)
  • limit Turing machines (another of Burgin's models)
  • trial-and-error machines (Ja. Hintikka and A. Mutanen [35])
  • general Turing machines (J. Schmidhuber[21])
  • Internet machines (van Leeuwen, J. and Wiedermann, J.[19])
  • evolutionary computers, which use DNA to produce the value of a function (Darko Roglic[36])
  • fuzzy computation (Jiří Wiedermann[26])
  • evolutionary Turing machines (Eugene Eberbach[37])
In the same book he also presents a list of "algorithmic schemes":



Martin Davis, in his writings on hypercomputation,[39][40] refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claim that this is a new field founded in the 1990s; this point of view relies on the history of computability theory (degrees of unsolvability; computability over functions, real numbers, and ordinals), as also mentioned above.
Andrew Hodges wrote a critical commentary[41] on Copeland and Proudfoot's article[1].



  1. ^ a b Copeland and Proudfoot, Alan Turing's forgotten ideas in computer science. Scientific American, April 1999
  2. ^ Alan Turing, 1939, Systems of Logic Based on Ordinals Proceedings London Mathematical Society Volumes 2–45, Issue 1, pp. 161–228.[1]
  3. ^ "Let us suppose that we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine" (Undecidable p. 167, a reprint of Turing's paper Systems of Logic Based On Ordinals)
  4. ^ These models have been independently developed by many different authors, including Hermann Weyl (1927). Philosophie der Mathematik und Naturwissenschaft.; the model is discussed in Shagrir, O. (June 2004). "Super-tasks, accelerating Turing machines and uncomputability". Theor. Comput. Sci. 317, 1-3 317: 105–114. doi:10.1016/j.tcs.2003.12.007. and in Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
  5. ^ a b c E. M. Gold (1965). "Limiting Recursion". Journal of Symbolic Logic 30 (1): 28–48. doi:10.2307/2270580. JSTOR 2270580., E. Mark Gold (1967). "Language identification in the limit". Information and Control 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
  6. ^ a b c Hilary Putnam (1965). "Trial and Error Predicates and the Solution to a Problem of Mostowksi". Journal of Symbolic Logic 30 (1): 49–57. doi:10.2307/2270581. JSTOR 2270581.
  7. ^ a b L. K. Schubert (July 1974). "Iterated Limiting Recursion and the Program Minimization Problem". Journal of the ACM 21 (3): 436–445. doi:10.1145/321832.321841.
  8. ^ Arnold Schönhage, "On the power of random access machines", in Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 520-529, 1979. Source of citation: Scott Aaronson, "NP-complete Problems and Physical Reality"[2] p. 12
  9. ^ Edith Spaan, Leen Torenvliet and Peter van Emde Boas (1989). "Nondeterminism, Fairness and a Fundamental Analogy". EATCS bulletin 37: 186–193.
  10. ^ Hajnal Andréka, István Németi and Gergely Székely, Closed Timelike Curves in Relativistic Computation, 2011.[3]
  11. ^ Todd A. Brun, Computers with closed timelike curves can solve hard problems, Found.Phys.Lett. 16 (2003) 245-253.[4]
  12. ^ S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent [5]
  13. ^ Hogarth, M., 1992, ‘Does General Relativity Allow an Observer to View an Eternity in a Finite Time?’, Foundations of Physics Letters, 5, 173–181.
  14. ^ István Neméti; Hajnal Andréka (2006). "Can General Relativistic Computers Break the Turing Barrier?". Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30-July 5, 2006. Proceedings. Lecture Notes in Computer Science. 3988. Springer. doi:10.1007/11780342.
  15. ^ Etesi, G., and Nemeti, I., 2002 'Non-Turing computations via Malament-Hogarth space-times', Int.J.Theor.Phys. 41 (2002) 341–370, Non-Turing Computations via Malament-Hogarth Space-Times:.
  16. ^ Earman, J. and Norton, J., 1993, ‘Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes’, Philosophy of Science, 5, 22–42.
  17. ^ Verifying Properties of Neural Networks p.6
  18. ^ Joel David Hamkins and Andy Lewis, Infinite time Turing machines, Journal of Symbolic Logic, 65(2):567-604, 2000.[6]
  19. ^ a b Jan van Leeuwen; Jiří Wiedermann (September 2000). "On Algorithms and Interaction". MFCS '00: Proceedings of the 25th International Symposium on Mathematical Foundations of Computer Science. Springer-Verlag.
  20. ^ Jürgen Schmidhuber (2000). "Algorithmic Theories of Everything". arXiv:quant-ph/0011122. Sections published as "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit", International Journal of Foundations of Computer Science 13 (4): 587–612; Section 6 published as "The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions", in J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT), Sydney, Australia, Lecture Notes in Artificial Intelligence, pages 216–228. Springer.
  21. ^ a b J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit". International Journal of Foundations of Computer Science 13 (4): 587–612. doi:10.1142/S0129054102001291.
  22. ^ There have been some claims to this effect; see Tien Kieu (2003). "Quantum Algorithm for the Hilbert's Tenth Problem". Int. J. Theor. Phys. 42 (7): 1461–1478. arXiv:quant-ph/0110136. doi:10.1023/A:1025780028846.. & the ensuing literature. Errors have been pointed out in Kieu's approach by Warren D. Smith in Three counterexamples refuting Kieu’s plan for “quantum adiabatic hypercomputation”; and some uncomputable quantum mechanical tasks
  23. ^ Bernstein and Vazirani, Quantum complexity theory, SIAM Journal on Computing, 26(5):1411-1473, 1997. [7]
  24. ^ Santos, Eugene S. (1970). "Fuzzy Algorithms". Information and Control 17 (4): 326–339. doi:10.1016/S0019-9958(70)80032-8.
  25. ^ Biacino, L.; Gerla, G. (2002). "Fuzzy logic, continuity and effectiveness". Archive for Mathematical Logic 41 (7): 643–667. doi:10.1007/s001530100128. ISSN 0933-5846.
  26. ^ a b Wiedermann, Jiří (2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theor. Comput. Sci. 317 (1–3): 61–69. doi:10.1016/j.tcs.2003.12.004.
  27. ^ a b Dmytro Taranovsky (July 17, 2005). "Finitism and Hypercomputation". Retrieved Apr 26, 2011.
  28. ^ Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
  29. ^ Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale. Complexity and Real Computation. ISBN 0-387-98281-7.
  30. ^ P. D. Welch (10-Sept-2006). The extent of computation in Malament-Hogarth spacetimes. arXiv:gr-qc/0609035.
  31. ^ Hava Siegelmann (April 1995). "Computation Beyond the Turing Limit". Science 268 (5210): 545–548. doi:10.1126/science.268.5210.545. PMID 17756722.
  32. ^ Hava Siegelmann; Eduardo Sontag (1994). "Analog Computation via Neural Networks". Theoretical Computer Science 131 (2): 331–360. doi:10.1016/0304-3975(94)90178-3.
  33. ^ Joel David Hamkins; Andy Lewis (2000). "Infinite Time Turing machines". Journal of Symbolic Logic 65 (2): 567–604.
  34. ^ Jiří Wiedermann (June 4, 2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theoretical Computer Science (Elsevier Science Publishers Ltd. Essex, UK) 317 (1–3).
  35. ^ Hintikka, Ja; Mutanen, A. (1998). "An Alternative Concept of Computability". Language, Truth, and Logic in Mathematics. Dordrecht. pp. 174–188.
  36. ^ Darko Roglic (24–Jul–2007). "The universal evolutionary computer based on super-recursive algorithms of evolvability". arXiv:0708.2686 [cs.NE].
  37. ^ Eugene Eberbach (2002). "On expressiveness of evolutionary computation: is EC algorithmic?". Computational Intelligence, WCCI 1: 564–569. doi:10.1109/CEC.2002.1006988.
  38. ^ Borodyanskii, Yu M; Burgin, M. S. (1994). "Operations and compositions in transrecursive operators". Cybernetics and Systems Analysis 30 (4): 473–478. doi:10.1007/BF02366556.
  39. ^ Davis, Martin, Why there is no such discipline as hypercomputation, Applied Mathematics and Computation, Volume 178, Issue 1, 1 July 2006, Pages 4–7, Special Issue on Hypercomputation
  40. ^ Davis, Martin (2004). "The Myth of Hypercomputation". Alan Turing: Life and Legacy of a Great Thinker. Springer.
  41. ^ Andrew Hodges (retrieved 23 September 2011). "The Professors and the Brainstorms". The Alan Turing Home Page.

