Friday, October 09, 2009

Plato's Nightlight Mining Company is claiming Aristarchus Crater and Surrounding Region

So what is the legality of claiming land on the moon?

What regions would you like to claim if you had the opportunity to make such a claim? Imagine the covered wagons of the old land rushes recast as spaceships, racing to plant their stakes and claim so many acres of land.

Stampede for Oklahoma's Unassigned Lands


Hubble Reveals Potential Titanium Oxide Deposits at Aristarchus and Schroter's Valley Rille

As a photocatalyst

Titanium dioxide, particularly in the anatase form, is a photocatalyst under ultraviolet light. Recently it has been found that titanium dioxide, when spiked with nitrogen ions, or doped with metal oxide like tungsten trioxide, is also a photocatalyst under visible and UV light. The strong oxidative potential of the positive holes oxidizes water to create hydroxyl radicals. It can also oxidize oxygen or organic materials directly. Titanium dioxide is thus added to paints, cements, windows, tiles, or other products for sterilizing, deodorizing and anti-fouling properties and is also used as a hydrolysis catalyst. It is also used in the Graetzel cell, a type of chemical solar cell.
The photocatalytic properties of titanium dioxide were discovered by Akira Fujishima in 1967[15] and published in 1972.[16] The process on the surface of the titanium dioxide was called the Honda-Fujishima effect.[15] Titanium dioxide has potential for use in energy production: as a photocatalyst, it can
  • carry out hydrolysis; i.e., break water into hydrogen and oxygen. Were the hydrogen collected, it could be used as a fuel. The efficiency of this process can be greatly improved by doping the oxide with carbon.[17]
  • Titanium dioxide can also produce electricity when in nanoparticle form. Research suggests that by using these nanoparticles to form the pixels of a screen, they generate electricity when transparent and under the influence of light. If subjected to electricity, on the other hand, the nanoparticles blacken, forming the basic characteristics of an LCD screen. According to creator Zoran Radivojevic, Nokia has already built a functional 200-by-200-pixel monochromatic screen which is energetically self-sufficient.
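The water-splitting chemistry behind the first point can be sketched with the standard photocatalytic half-reactions (a textbook-level sketch, not tied to any particular device):

```latex
% UV photon excites an electron-hole pair in the semiconductor
\mathrm{TiO_2} + h\nu \;\rightarrow\; e^- + h^+
% the positive holes oxidize water
2\,\mathrm{H_2O} + 4\,h^+ \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+}
% the excited electrons reduce protons to collectable hydrogen fuel
4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2}
```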
In 1995 Fujishima and his group discovered the superhydrophilicity phenomenon for titanium dioxide coated glass exposed to sun light.[15] This resulted in the development of self-cleaning glass and anti-fogging coatings.
TiO2 incorporated into outdoor building materials, such as paving stones in noxer blocks or paints, can substantially reduce concentrations of airborne pollutants such as volatile organic compounds and nitrogen oxides.[18]
A photocatalytic cement that uses titanium dioxide as a primary component, produced by Italcementi Group, was included in Time's Top 50 Inventions of 2008.[19]

For wastewater remediation

TiO2 offers great potential as an industrial technology for detoxification or remediation of wastewater due to several factors.

  1. The process occurs under ambient conditions, though very slowly; direct UV light exposure increases the rate of reaction.

  2. The formation of photocyclized intermediate products, unlike direct photolysis techniques, is avoided.

  3. Oxidation of the substrates to CO2 is complete.

  4. The photocatalyst is inexpensive and has a high turnover.

  5. TiO2 can be supported on suitable reactor substrates.
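The rate behavior in point 1 is commonly modeled in the photocatalysis literature with Langmuir-Hinshelwood kinetics; as a hedged sketch (with r the degradation rate, k the rate constant, K the adsorption constant, and C the pollutant concentration):

```latex
r \;=\; -\frac{dC}{dt} \;=\; \frac{k\,K\,C}{1 + K\,C}
```

At low concentrations (KC much less than 1) this reduces to pseudo-first-order decay, which is why dilute wastewater streams often show a simple exponential fall-off of pollutant concentration under UV.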


The lunar south pole as it will appear on the night of impact. Photo Credit - NMSU / MSFC Tortugas Observatory

The impact site is crater Cabeus near the Moon's south pole. NASA is guiding the Lunar Crater Observation and Sensing Satellite ("LCROSS" for short) and its Centaur booster rocket into the crater's floor for a spectacular double-impact designed to "unearth" signs of lunar water. See: LCROSS Viewer's Guide

Image Above: The dark blue and purple areas at the moon's poles indicate neutron emissions that are consistent with hydrogen-rich deposits covered by desiccated regolith. These hydrogen signatures are possible indications of water in the form of ice or hydrated minerals. Feldman et al., Science, 281, 1496, 1998. Credit: NASA

Just like on Earth, water will be a crucial resource on the moon. Transporting water and other goods from Earth to the moon’s surface is expensive. Finding natural resources, such as water ice, on the moon could help expedite lunar exploration. The LCROSS mission will search for water, using information learned from the Clementine and Lunar Prospector missions.

By going to the moon for extended periods of time, a new generation of explorers will learn how to work safely in a harsh environment. A lunar outpost is a stepping stone to future exploration of other bodies in our solar system. The moon also offers many clues about when the planets were formed.

See: Backreaction: Free Falling



Saturday, October 03, 2009

Creating the Perfect Human Being or Maybe.....

..... a Frankenstein?:)

Seriously, there are defined differences between human intelligence and AI. I think people have a tendency to blur the lines where machinery is concerned. This of course required some reading, and the wiki quotes herein help to orient the discussion.

Of course the pictures painted in fiction are closely related to the actual approach to development, while in some respects they represent something more: the development of the perfect human being.

It seems there is a quest "to develop" human beings, not just robots.

Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Textbooks define the field as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] John McCarthy, who coined the term in 1956,[3][4] defines it as "the science and engineering of making intelligent machines."
The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine.[5] This raises philosophical issues about the nature of the mind and limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity.[6] Artificial intelligence has been the subject of breathtaking optimism,[7] has suffered stunning setbacks[8][9] and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other.[10] Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[11] General intelligence (or "strong AI") is still a long-term goal of (some) research.[12]

Rusty the Tin man

Lacking a heart.....

Knowledge representation

Knowledge representation[43] and knowledge engineering[44] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[45] situations, events, states and time;[46] causes and effects;[47][48] knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology[49] (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
Among the most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[50] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[51]
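The qualification problem described above can be made concrete in a few lines (a hypothetical toy rule base, not any real AI system): the default "birds fly" is useful, but every exception has to be hand-encoded, and the list of exceptions never really ends.

```python
# Default reasoning with a toy rule base. The default rule holds unless a
# known exception "qualifies" it -- McCarthy's qualification problem is that
# the exception table for any commonsense rule grows without bound.

DEFAULTS = {"bird": {"flies": True, "sings": True}}
EXCEPTIONS = {
    "penguin": {"flies": False},
    "ostrich": {"flies": False, "sings": False},
}

def infer(kind, trait):
    """Apply the default for birds unless an exception overrides it."""
    if kind in EXCEPTIONS and trait in EXCEPTIONS[kind]:
        return EXCEPTIONS[kind][trait]
    return DEFAULTS["bird"].get(trait)

print(infer("sparrow", "flies"))   # True  (default applies)
print(infer("penguin", "flies"))   # False (exception overrides)
```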
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering — they must be built, by hand, one complicated concept at a time.[52] A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.
The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could actually say out loud. For example, a chess master will avoid a particular chess position because it "feels too exposed"[53] or an art critic can take one look at a statue and instantly realize that it is a fake.[54] These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically.[55] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.[55]

Bicentennial man

....they wanted to embed robotic features with emotive functions...

Social intelligence

Kismet, a robot with rudimentary social skills
Emotion and social skills[73] play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Second, for good human-computer interaction, an intelligent machine needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

....finally, having the ability to dream:)

Integrating the approaches

Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents are rational, thinking humans.[92] The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works — some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields—such as decision theory and economics—that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.[93]
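The agent definition quoted above reduces to a simple perceive-decide-act loop; here is a minimal sketch (the one-dimensional world and the action names are hypothetical, for illustration only):

```python
# A toy intelligent agent: it perceives its environment (its position on a
# number line) and picks the action that maximizes its chance of success,
# here defined as getting closer to a goal position.

def perceive(environment):
    """The agent's percept: just the current position."""
    return environment["position"]

def agent_act(environment, goal):
    """Choose the action whose predicted outcome is closest to the goal."""
    position = perceive(environment)
    actions = {"left": position - 1, "stay": position, "right": position + 1}
    # "Maximize chances of success" = minimize predicted distance to goal.
    return min(actions, key=lambda a: abs(actions[a] - goal))

env = {"position": 3}
print(agent_act(env, goal=7))  # right
```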

Agent architectures and cognitive architectures

Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.[94] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[95] Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.
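Brooks' subsumption idea, where fast reactive layers sit below and can override slower deliberative layers, can be sketched as a priority-ordered stack (the layer names and sensor keys here are hypothetical):

```python
# A sketch of a subsumption-style hierarchical controller: layers are
# checked from most reactive to most deliberative, and the first layer
# that fires "subsumes" (overrides) everything above it.

def avoid_obstacle(percepts):           # lowest, reactive layer
    return "turn" if percepts.get("obstacle") else None

def wander(percepts):                   # middle layer
    return "wander" if percepts.get("bored") else None

def plan_route(percepts):               # highest, deliberative layer
    return "follow-plan"                # always has a plan to offer

LAYERS = [avoid_obstacle, wander, plan_route]  # reactive layers checked first

def control(percepts):
    """Return the command of the highest-priority layer that fires."""
    for layer in LAYERS:
        command = layer(percepts)
        if command is not None:
            return command

print(control({"obstacle": True}))   # turn
print(control({}))                   # follow-plan
```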
So to me there is an understanding that needs to remain consistent in our views as one moves forward here: what is created is not really the human being that we are, but a manifestation of it. I think people tend to "lose perspective" on human intelligence versus A.I., so the issue then is to note these differences. The distinction for me rests in "what outcomes are possible in the diversity of the human population, matched to a purpose for personal development toward an ideal." No match can be found for this creative attachment, which arises distinctively in each person's probable outcome. The difference here is that "if" all knowledge already existed, and "if" we were to have access to this "collective unconscious," so to say, then how is it that such thinking cannot point toward new paradigms for personal development in society? New science? AI already has all these knowledge factors included, so can it give outcomes amounting to a "quantum leap??":) No, it needs human intervention; or can AI already give us that new science? You see? There would be "no need" for an Einstein?


Thursday, September 24, 2009

DNA Computing

DNA computing is a form of computing which uses DNA, biochemistry and molecular biology instead of the traditional silicon-based computer technologies. DNA computing, or, more generally, molecular computing, is a fast-developing interdisciplinary area. Research and development in this area concerns theory, experiments and applications of DNA computing. See: DNA computing
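The flavor of Adleman's original DNA computation — generate candidate answers massively in parallel, then chemically filter out the invalid ones — can be caricatured in software (the four-vertex directed graph below is hypothetical; real DNA computing does the generation and filtering with strand hybridization, not with Python loops):

```python
# A software caricature of Adleman-style DNA computing for the Hamiltonian
# path problem: enumerate candidate vertex orderings (the "test tube" of
# random strands), then filter out every path that starts or ends in the
# wrong place or uses a missing edge.
from itertools import permutations

EDGES = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}
VERTICES = ["A", "B", "C", "D"]

def hamiltonian_paths(start, end):
    """Keep only orderings that are valid Hamiltonian paths from start to end."""
    found = []
    for order in permutations(VERTICES):
        if order[0] != start or order[-1] != end:
            continue  # wrong endpoints: discarded in the lab by PCR selection
        if all((a, b) in EDGES for a, b in zip(order, order[1:])):
            found.append(order)  # every consecutive pair is a real edge
    return found

print(hamiltonian_paths("A", "D"))  # [('A', 'B', 'C', 'D')]
```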


Clifford of Asymptotia is hosting a guest post by Len Adleman: Quantum Mechanics and Mathematical Logic.

Today I’m pleased to announce that we have a guest post from a very distinguished colleague of mine, Len Adleman. Len is best known as the “A” in RSA and the inventor of DNA-computing. He is a Turing Award laureate. However, he considers himself “a rank amateur” (his words!) as a physicist.

Len Adleman-For a long time, physicists have struggled with perplexing “meta-questions” (my phrase): Does God play dice with the universe? Does a theory of everything exist? Do parallel universes exist? As the physics community is acutely aware, these are extremely difficult questions and one may despair of ever finding meaningful answers. The mathematical community has had its own meta-questions that are no less daunting: What is “truth”? Do infinitesimals exist? Is there a single set of axioms from which all of mathematics can be derived? In what many consider to be on the short list of great intellectual achievements, Frege, Russell, Tarski, Turing, Godel, and other logicians were able to clear away the fog and sort these questions out. The framework they created, mathematical logic, has put a foundation under mathematics, provided great insights and profound results. After many years of consideration, I have come to believe that mathematical logic, suitably extended and modified (perhaps to include complexity theoretic ideas), has the potential to provide the same benefits to physics. In the following remarks, I will explore this possibility.


See Also:
  • Riemann Hypothesis: A Pure Love of Math

  • Ideas on Quantum Interrogation

  • Mersenne Prime: One < the Power of two

  • Lingua Cosmica
    Tuesday, September 22, 2009

    Correlating Gravitational Wave Production in LIGO

    Drawing by Glen Edwards, Utah State University, Logan, UT

    The most important thing is to be motivated by your own intellectual curiosity. - KIP THORNE


    Fig. 1. The four forces (or interactions) of Nature, their force carrying particles and the phenomena or particles affected by them. The three interactions that govern the microcosmos are all much stronger than gravity and have been unified through the Standard Model


    Dr. Kip Thorne, Caltech 01-Relativity-The First 20th Century Revolution


    Why are two installations necessary?


    See: LIGO Listens for Gravitational Echoes of the Birth of the Universe

    Results set new limits on gravitational waves originating from the Big Bang; constrain theories about universe formation

    Pasadena, Calif.—An investigation by the LIGO (Laser Interferometer Gravitational-Wave Observatory) Scientific Collaboration and the Virgo Collaboration has significantly advanced our understanding of the early evolution of the universe.

    Analysis of data taken over a two-year period, from 2005 to 2007, has set the most stringent limits yet on the amount of gravitational waves that could have come from the Big Bang in the gravitational wave frequency band where LIGO can observe. In doing so, the gravitational-wave scientists have put new constraints on the details of how the universe looked in its earliest moments.

    Much like it produced the cosmic microwave background, the Big Bang is believed to have created a flood of gravitational waves—ripples in the fabric of space and time—that still fill the universe and carry information about the universe as it was immediately after the Big Bang. These waves would be observed as the "stochastic background," analogous to a superposition of many waves of different sizes and directions on the surface of a pond. The amplitude of this background is directly related to the parameters that govern the behavior of the universe during the first minute after the Big Bang.

    Earlier measurements of the cosmic microwave background have placed the most stringent upper limits of the stochastic gravitational wave background at very large distance scales and low frequencies. The new measurements by LIGO directly probe the gravitational wave background in the first minute of its existence, at time scales much shorter than accessible by the cosmic microwave background.
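The "amplitude of this background" referred to above is conventionally expressed as the gravitational-wave energy density per logarithmic frequency interval, in units of the critical density of the universe (a standard definition, stated here for context rather than taken from the press release itself):

```latex
\Omega_{\mathrm{gw}}(f) \;=\; \frac{1}{\rho_c}\,\frac{d\rho_{\mathrm{gw}}}{d\ln f},
\qquad
\rho_c \;=\; \frac{3 H_0^2 c^2}{8 \pi G}
```

Upper limits like the LIGO result quoted here are bounds on Ω_gw in the detector's sensitive frequency band.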
    The research, which appears in the August 20 issue of the journal Nature, also constrains models of cosmic strings, objects that are proposed to have been left over from the beginning of the universe and subsequently stretched to enormous lengths by the universe's expansion; the strings, some cosmologists say, can form loops that produce gravitational waves as they oscillate, decay, and eventually disappear.

    Gravitational waves carry with them information about their violent origins and about the nature of gravity that cannot be obtained by conventional astronomical tools. The existence of the waves was predicted by Albert Einstein in 1916 in his general theory of relativity. The LIGO and GEO instruments have been actively searching for the waves since 2002; the Virgo interferometer joined the search in 2007.

    The authors of the new paper report that the stochastic background of gravitational waves has not yet been discovered. But the nondiscovery of the background described in the Nature paper already offers its own brand of insight into the universe's earliest history.


    Saturday, September 19, 2009

    Macroscopic Similarities in a Microscopic World

    Berkeley Lab Technology Dramatically Speeds Up Searches of Large Databases, by Jon Bashor

    In the world of physics, one of the most elusive events is the creation and detection of “quark-gluon plasma,” the theorized atomic outcome of the “Big Bang” which could provide insight into the origins of the universe. By using experiments that involve millions of particle collisions, researchers hope to find unambiguous evidence of quark-gluon plasma.

    It's not just about "mathematical abstraction" but about seeing what good it can be used for. One can be in denial about the prospects, but it gives perspective on current situations, in that it helps to direct thinking forward instead of feeling as if "you are just floating in space without being able to move."

    Helpless are we? Not considering flapping one's wings?

    Imagine, then, trying to orient yourself toward the spacecraft while "floating in space." It is like attempting to ride a bicycle for the first time: one must know how to balance while making the appropriate movements toward where one wants to go. It's something that has to be learned in the theoretical enterprise while still held to Earth's environs?

    There might be a middle way. String theory's mathematical tools were designed to unlock the most profound secrets of the cosmos, but they could have a far less esoteric purpose: to tease out the properties of some of the most complex yet useful types of material here on Earth.

    Both string theorists and condensed matter physicists - those studying the properties of complex matter phases such as solids and liquids - are enthused by the development. "I am flabbergasted," says Jan Zaanen, a condensed matter theorist from the University of Leiden in the Netherlands. "The theory is calculating precisely what we are seeing in experiments."
    See: What string theory is really good for

    So how has this helped the idea of "minimum length?"

    Using the anti–de Sitter/conformal field theory correspondence to relate fermionic quantum critical fields to a gravitational problem, we computed the spectral functions of fermions in the field theory. By increasing the fermion density away from the relativistic quantum critical point, a state emerges with all the features of the Fermi liquid. See:String Theory, Quantum Phase Transitions, and the Emergent Fermi Liquid
    So we have a beginning here for consideration within the framework of the condensed matter theorist's state of existence? String theory is working alongside it to direct the idea of matter formation?


    "Our work is about comparing the data we collect in the STAR detector with modern calculations, so that we can write down equations on paper that exactly describe how the quark-gluon plasma behaves," says Jerome Lauret from Brookhaven National Laboratory. "One of the most important assumptions we've made is that, for very intense collisions, the quark-gluon plasma behaves according to hydrodynamic calculations in which the matter is like a liquid that flows with no viscosity whatsoever."

    Proving that under certain conditions the quark-gluon plasma behaves according to such calculations is an exciting discovery for physicists, as it brings them a little closer to understanding how matter behaves at very small scales. But the challenge remains to determine the properties of the plasma under other conditions.

    "We want to measure when the quark-gluon plasma behaves like a perfect fluid with zero viscosity, and when it doesn't," says Lauret. "When it doesn't match our calculations, what parameters do we have to change? If we can put everything together, we might have a model that reproduces everything we see in our detector."
    See: Probing the Perfect Liquid with the STAR Grid

    Looking back in time toward the beginning of our universe has been one of the things occupying my time as I look through the experimental procedures that have been developed. While the LHC provides a template of all the historical drama of science put forward, it is also, in my mind, a platform for pushing perspective forward from "a beginning of time scenario" that helps us identify what happens in that formation. It helps us to orient space and what happens to it.

    It provides for me a place where we can talk about a large scale situation in terms of the universe as to what it contains to help motivate this universe to become what it is.

    Cycle of Birth, Life, and Death - Origin, Identity, and Destiny, by Gabriele Veneziano

    In one form or another, the issue of the ultimate beginning has engaged philosophers and theologians in nearly every culture. It is entwined with a grand set of concerns, one famously encapsulated in an 1897 painting by Paul Gauguin: D'où venons-nous? Que sommes-nous? Où allons-nous? "Where do we come from? What are we? Where are we going?"
    See here for more information.

    So how did this process help orient the things that were brought forward under the idea that the universe is a "cosmological box" people want to talk about? In my mind, it became a much more flexible topic when Veneziano began to talk about what came before, what existed outside that box. Abstractly, the box had six faces, and the direction of possibilities became part of the depth of this situation. It was a matter indeed of thinking outside the box.

    I know that for some this may seem a waste of one's time, but for me it is the motivator (not God as a creator, but what actually propels this universe) and the question of what can exist now that draw my attention. The picture has been ever so slightly pushed "back in time" to see that the universe began with "microscopic processes that define the state of the universe in the way it is now." The LHC should be able to answer this, although it is still restricted by the energy valuation given to this process.

    A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Theoretical physicists have now used string theory to describe the quantum-critical state of electrons that can lead to high-temperature superconductivity. (Credit: Mai-Linh Doan / Courtesy of Wikimedia Commons) See:

    Physical Reality Of String Theory Shown In Quantum-critical State Of Electrons

    Quantum soup

    But now, Zaanen, together with his colleagues Cubrovic and Schalm, are trying to change this situation, by applying string theory to a phenomenon that physicists, including Zaanen, have for the past fifteen years been unable to explain: the quantum-critical state of electrons. This special state occurs in a material just before it becomes superconductive at high temperature. Zaanen describes the quantum-critical state as a 'quantum soup', whereby the electrons form a collective independent of distances, where the electrons exhibit the same behaviour at small quantum mechanical scale or at macroscopic human scale.
    See Also:

    Fermions and the AdS/CFT correspondence: quantum phase transitions and the emergent Fermi-liquid

    A central mystery in quantum condensed matter physics is the zero temperature quantum phase transition between strongly renormalized Fermi-liquids as found in heavy fermion intermetallics and possibly high Tc superconductors. Field theoretical statistical techniques are useless because of the fermion sign problem, but we will present here results showing that the mathematics of string theory is capable of describing fermionic quantum critical states. Using the Anti-de-Sitter/Conformal Field Theory (AdS/CFT) correspondence to relate fermionic quantum critical fields to a gravitational problem, we compute the spectral functions of fermions in the field theory. Deforming away from the relativistic quantum critical point by increasing the fermion density we show that a state emerges with all the features of the Fermi-liquid. Tuning the scaling dimensions of the critical fermion fields we find that the quasiparticle disappears at a quantum phase transition of a purely statistical nature, not involving any symmetry change. These results are obtained by computing the solutions of a classical Dirac equation in an AdS space time containing a Reissner-Nordstrom black hole, where the information regarding Fermi-Dirac statistics in the field theory is processed by quasi-normal Dirac modes at the outer horizon.

    Wednesday, September 16, 2009

    FLAMINGOS-2 Achieves First Light Milestone

    Figure 1: FLAMINGOS-2 image of the Tarantula Nebula (30 Doradus) located in the Large Magellanic Cloud, a satellite galaxy to the Milky Way. A concentration of massive young stars in the very center of the cluster is causing the hydrogen gas to fluoresce due to excitation by ultraviolet light. This 3-color composite image combines the J-band (1.25 microns, blue), H-band (1.65 microns, green) and Ks-band (2.2 microns, red). The image has a total exposure (integration) of less than 10 minutes and a resolution of about 0.6 arcsecond. Credit: Gemini Observatory/University of Florida/AURA/Anthony Gonzalez
    As part of on-going acceptance testing, FLAMINGOS-2 (Florida Multi-object Infrared Grism Observing Spectrograph) obtained first light images on the Gemini South telescope. Several images from this first observing run are shown here (Figures 1 & 2) and demonstrate the instrument’s initial performance. The telescope and FLAMINGOS-2 together produced high-quality images, as good as 0.4-arcsecond FWHM. 

    The efforts of the University of Florida instrument team, led by Stephen Eikenberry, and a large number of Gemini staff made achievement of this important step possible. The initial tasks completed include basic alignment of the instrument with the telescope and initial checks of the functionality of imaging and longslit spectroscopy modes. 

    A number of significant milestones must be reached before FLAMINGOS-2 will be available for Gemini community scientific use. One severe limitation now is the lack of a science-grade detector. Several more observing runs are planned through the end of the current semester to fully commission the instrument and integrate it with the telescope, including tests of a new detector. The array will need to be fully characterized, with measurements of plate scale, linearity, and sensitivity across the usable bandpass, and then the throughput and image quality for all modes will be measured. See more here

    Monday, September 14, 2009

    Where Susskind leaves off, Seth Lloyd begins

    A picture, a photograph, or a painting is not the real world that it depicts. It's flat, not full with three dimensional depth like the real thing. Look at it from the side, almost edge on. It doesn't look anything like the real scene viewed from an angle. In short, it's two dimensional while the world is three dimensional. The artist, using perceptual sleight of hand, has conned you into producing a three dimensional image in your brain, but in fact the information just isn't there to form a three dimensional model of the scene. There is no way to tell if that figure is a distant giant or a close midget. There is no way to tell if the figure is made of plaster or if it's filled with blood or guts. The brain is providing information that is not really present in the painted strokes on the canvas or the darkened grains of silver on the photographic surface. The Cosmic Landscape by Leonard Susskind, pages 337 and 338
    See: The elephant and the event horizon, 26 October 2006, by Amanda Gefter at New Scientist.

    So while we design our methods of picturing how the universe looks, it is by the design of experimental procedures that we have pushed perspective toward the "depth of imaging" with which we frame what we propose is happening. This, then, is a method based on the Gedanken experiment that allows "an alternate view of the reality" of what is happening inside the black hole, "thought of" before we master the perspective of what actually happens outside.

    Gedanken Experiments Involving Black Holes


    Analysis of several gedanken experiments indicates that black hole complementarity cannot be ruled out on the basis of known physical principles. Experiments designed by outside observers to disprove the existence of a quantum-mechanical stretched horizon require knowledge of Planck-scale effects for their analysis. Observers who fall through the event horizon after sampling the Hawking radiation cannot discover duplicate information inside the black hole before hitting the singularity. Experiments by outside observers to detect baryon number violation will yield significant effects well outside the stretched horizon.



     At 11:20 AM, September 13, 2009, Blogger Bee said: The Schwarzschild radius depends on the mass, it thus doesn't define a fixed length. If one ties the Schwarzschild radius to the Compton wavelength via the uncertainty principle, one obtains a length and a mass, which is exactly the Planck length and Planck mass.
    While entertaining the issues put forward by "The Minimal Length in Quantum Gravity: An Outside View," some issues came to mind about pushing current proposals in science toward identifying how we can look at the inside of a black hole using information postulated by illumination "outside."

    Seth Lloyd is a professor of mechanical engineering at Massachusetts Institute of Technology. He refers to himself as a "quantum mechanic".

    While one recognizes the relationship Susskind pointed out by doing thought experiments about what processes allow us to search "inside the black hole," it is information that is "not lost" that allows us to understand what is actually happening with time as it moves in the black hole's internal direction. This, then, is an "outside perspective" held in contention with the Planck length, and we might ask: what the heck actually exists inside that we are all speculating about?

    Quantum Entanglement Benefits Exist after Links Are Broken

    By Charles Q. Choi

    “Spooky action at a distance” is how Albert Einstein famously derided the concept of quantum entanglement—where objects can become linked and instantaneously influence one another regardless of distance. Now researchers suggest that this spooky action in a way might work even beyond the grave, with its effects felt after the link between objects is broken.

    In experiments with quantum entanglement, which is an essential basis for quantum computing and cryptography, physicists rely on pairs of photons. Measuring one of an entangled pair immediately affects its counterpart, no matter how far apart they are theoretically. The current record distance is 144 kilometers, from La Palma to Tenerife in the Canary Islands.

    In practice, entanglement is an extremely delicate condition. Background disturbances readily destroy the state—a bane for quantum computing in particular, because calculations are done only as long as the entanglement lasts. But for the first time, quantum physicist Seth Lloyd of the Massachusetts Institute of Technology suggests that memories of entanglement can survive its destruction. He compares the effect to Emily Brontë’s novel Wuthering Heights: “the spectral Catherine communicates with her quantum Heathcliff as a flash of light from beyond the grave.”

    The insight came when Lloyd investigated what happened if entangled photons were used for illumination. One might suppose they could help take better pictures. For instance, flash photography shines light out and creates images from photons that are reflected back from the object to be imaged, but stray photons from other objects could get mistaken for the returning signals, fuzzing up snapshots. If the flash emitted entangled photons instead, it would presumably be easier to filter out noise signals by matching up returning photons to linked counterparts kept as references.
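    The matching idea in the paragraph above can be sketched as a purely classical toy model (an analogy only: real quantum illumination exploits entangled-photon statistics, and every name and number below is illustrative). Each flash keeps a reference record of its outgoing photon; a returning photon is accepted only if it correlates with that reference, which rejects most stray photons.

```python
import random

random.seed(42)

def run(n_flashes=10_000, noise_photons=5, window=0.01):
    """Toy filter: each flash emits a photon at a random time and keeps
    its timestamp as a reference. Returns consist of the true echo plus
    stray photons at random times; only returns matching the reference
    within `window` are accepted."""
    kept_signal = kept_noise = 0
    for _ in range(n_flashes):
        t_ref = random.random()                    # retained reference copy
        echoes = [t_ref]                           # the true return
        echoes += [random.random() for _ in range(noise_photons)]
        for t in echoes:
            if abs(t - t_ref) <= window:           # correlate with reference
                if t == t_ref:
                    kept_signal += 1
                else:
                    kept_noise += 1
    return kept_signal, kept_noise

signal, noise = run()
print(signal)  # every one of the 10,000 true returns survives
print(noise)   # only a few percent of the 50,000 stray photons do
```

    The quantum version wins by a much larger margin than any classical timestamp scheme, which is what makes Lloyd's result surprising, but the bookkeeping of "match returns against retained references" is the same.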

    Still, given how fragile entanglement is, Lloyd did not expect quantum illumination to ever work. But “I was desperate,” he recalls, keen on winning funding from a Defense Advanced Research Projects Agency’s sensor program for imaging in noisy environments. Surprisingly, when Lloyd calculated how well quantum illumination might perform, it apparently not only worked, but “to gain the full enhancement of quantum illumination, all entanglement must be destroyed,” he explains.

    Lloyd admits this finding is baffling—and not just to him. Prem Kumar, a quantum physicist at Northwestern University, was skeptical of any benefits from quantum illumination until he saw Lloyd’s math. “Everyone’s trying to get their heads around this. It’s posing more questions than answers,” Kumar states. “If entanglement does not survive, but you can seem to accrue benefits from it, it may now be up to theorists to see if entanglement is playing a role in these advantages or if there is some other factor involved.”

    As a possible explanation, Lloyd suggests that although entanglement between the photons might technically be completely lost, some hint of it may remain intact after a measurement. “You can think of photons as a mixture of states. While most of these states are no longer entangled, one or a few remain entangled, and it is this little bit in the mixture that is responsible for this effect,” he remarks.

    If quantum illumination works, Lloyd suggests it could boost the sensitivity of radar and x-ray systems as well as optical telecommunications and microscopy by a millionfold or more. It could also lead to stealthier military scanners because they could work even when using weaker signals, making them easier to conceal from adversaries. Lloyd and his colleagues detailed a proposal for practical implementation of quantum illumination in a paper submitted in 2008 to Physical Review Letters building off theoretical work presented in the September 12 Science. See: more here
    See Also:

    Myths about the minimal length by Lubos Motl

    Saturday, September 12, 2009

    Cosmic Origins Spectrograph in Hubble

    Credit: NASA

    Instrument Overview

    COS is designed to study the large-scale structure of the universe and how galaxies, stars and planets formed and evolved. It will help determine how elements needed for life such as carbon and iron first formed and how their abundances have increased over the lifetime of the universe.
    As a spectrograph, COS won’t capture the majestic visual images that Hubble is known for, but rather it will perform spectroscopy, the science of breaking up light into its individual components. Any object that absorbs or emits light can be studied with a spectrograph to determine its temperature, density, chemical composition and velocity.

    A primary science objective for COS is to measure the structure and composition of the ordinary matter that is concentrated in what scientists call the ‘cosmic web’—long, narrow filaments of galaxies and intergalactic gas separated by huge voids. The cosmic web is shaped by the gravity of the mysterious, underlying cold dark matter, while ordinary matter serves as a luminous tracery of the filaments. COS will use scores of faint distant quasars as ‘cosmic flashlights,’ whose beams of light have passed through the cosmic web. Absorption of this light by material in the web will reveal the characteristic spectral fingerprints of that material. This will allow Hubble observers to deduce its composition and its specific location in space. See: Hubble Space Telescope Service Mission 4- Cosmic Origins Spectrograph
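    A minimal sketch of the spectroscopic bookkeeping behind this passage (the observed wavelength below is made up for illustration, not actual COS data): comparing an absorption line's observed wavelength with its laboratory rest wavelength gives the redshift z, which places the absorbing gas at a specific location along the quasar sightline.

```python
# Hydrogen Lyman-alpha rest wavelength in nanometers
LYMAN_ALPHA_REST_NM = 121.567

def redshift(observed_nm, rest_nm=LYMAN_ALPHA_REST_NM):
    """z = (lambda_observed - lambda_rest) / lambda_rest"""
    return observed_nm / rest_nm - 1.0

# A hypothetical absorption feature seen at 145.9 nm
z = redshift(145.9)
print(round(z, 2))  # 0.2
```

    Each absorber along the sightline imprints its own set of lines at its own z, which is how a single quasar spectrum maps out many filaments of the cosmic web at once.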


    Cosmic Origins Spectrograph optical path: The FUV and NUV channels initially share a common path. The first optic is either a concave, holographically ruled diffraction grating which directs light to the FUV detector (red) or a concave mirror directing light to the NUV gratings and the NUV detector (purple). The green ray packets represent the FUV optical paths, and blue ray packets represent the NUV optical paths. A wavelength reference and flat field delivery system is shown at top left (orange ray packets) and can provide simultaneous wavelength reference spectra during science observations.
    See: Cosmic Origins Spectrograph

    Thursday, September 10, 2009

    Hubble Opens New Eyes on the Universe

    Image above: From the left are astronauts Michael J. Massimino, Michael T. Good, both mission specialists; Gregory C. Johnson, pilot; Scott D. Altman, commander; K. Megan McArthur, John M. Grunsfeld and Andrew J. Feustel, all mission specialists. Image credit: NASA

    Veteran astronaut Scott D. Altman commanded the final space shuttle mission to Hubble. Retired Navy Capt. Gregory C. Johnson served as pilot. Mission specialists included veteran spacewalkers John M. Grunsfeld and Michael J. Massimino and first-time space fliers Andrew J. Feustel, Michael T. Good and K. Megan McArthur.

    Atlantis’ astronauts repaired and upgraded the Hubble Space Telescope, conducting five spacewalks during their mission to extend the life of the orbiting observatory. They successfully installed two new instruments and repaired two others, bringing them back to life, replaced gyroscopes and batteries, and added new thermal insulation panels to protect the orbiting observatory. The result is six working, complementary science instruments with capabilities beyond what was available and an extended operational lifespan until at least 2014.

    With the newly installed Wide Field Camera 3, Hubble will be able to observe in ultraviolet and infrared spectrums as well as visible light, peer deep into the cosmic frontier in search of the earliest star systems, and study planets in the solar system. The telescope's new Cosmic Origins Spectrograph will allow it to study the grand-scale structure of the universe, including the star-driven chemical evolution that produces carbon and the other elements necessary for life. See: STS-125 Mission Information


    Credit: NASA, ESA, and the Hubble SM4 ERO Team-These four images are among the first observations made by the new Wide Field Camera 3 aboard the upgraded NASA Hubble Space Telescope.

    The image at top left shows NGC 6302, a butterfly-shaped nebula surrounding a dying star. At top right is a picture of a clash among members of a galactic grouping called Stephan's Quintet. The image at bottom left gives viewers a panoramic portrait of a colorful assortment of 100,000 stars residing in the crowded core of Omega Centauri, a giant globular cluster. At bottom right, an eerie pillar of star birth in the Carina Nebula rises from a sea of greenish-colored clouds.


    September 9, 2009: NASA's Hubble Space Telescope is back in business, ready to uncover new worlds, peer ever deeper into space, and even map the invisible backbone of the universe. The first snapshots from the refurbished Hubble showcase the 19-year-old telescope's new vision. Topping the list of exciting new views are colorful multi-wavelength pictures of far-flung galaxies, a densely packed star cluster, an eerie "pillar of creation," and a "butterfly" nebula. With its new imaging camera, Hubble can view galaxies, star clusters, and other objects across a wide swath of the electromagnetic spectrum, from ultraviolet to near-infrared light. A new spectrograph slices across billions of light-years to map the filamentary structure of the universe and trace the distribution of elements that are fundamental to life. The telescope's new instruments also are more sensitive to light and can observe in ways that are significantly more efficient and require less observing time than previous generations of Hubble instruments. NASA astronauts installed the new instruments during the space shuttle servicing mission in May 2009. Besides adding the instruments, the astronauts also completed a dizzying list of other chores that included performing unprecedented repairs on two other science instruments. An Early Observation Release

    Friday, September 04, 2009

    Justice, Open to Interpretation While Reducible to Logic?

    Backreaction: A little less conversation, a little more science please

    This is the continuation of the proposal toward a 21st Century View of Politics and good Government. While I had mentioned Consumerism, this was to point out the supporting structure for elected government frameworks and their idealization supported by the vote of the people.

    sknguy II: The notion that something is Just is an appeal to the ideas of Justice. And Justice is a personal concept. There is no natural version of the ideas. And Justice isn't a monolithic concept which a particular person or group can recite as being a true version. Every person, and every generation, contributes to the evolving ideas of Justice, or what is Just.

    The laws we write are a way of institutionalizing the ideas of Justice. And laws are the translation of society's perceptions of what Justice means at some point in time. As our perceptions of Justice changes, our laws change and evolve with it.

    So when calling things "Just", as in "just society" or "just government", you're appealing to the notion of what Justice means. If you talk about something as being just, you'll have to accept the fact that it'll also be ever evolving and may be inconsistent from one person to the next. And that what "it" looks like today will likely look quite different years down the road.

    Yes, I have been thinking about the way you have described Justice. You have spelt it out very well. I must concede as well to the bold emphasis and recognize it will evolve as you have pointed out. I think, though, that's my point. What will it evolve into in the 21st Century? I am recognizing all that Justice has become to this point, and in this spirit of democratization asking whether our laws have failed to recognize the swing democracy has taken. Has it?

    One could contend, for sure, that all is well, and we have come back to what you said about our perceptions of Justice. But the idea then is that such an evolution, containing all the "best of the laws that have been written," could have swung in favour of decay, and that the signs within this interpretation have some value recognized as a foundational truth. Whose truth?

    Although I believe that governance is a personal matter, and that it's the foundation for societal governance, here's the UN's take on "good governance" as a process of decision making:

    Edit: I should clarify that the UN article refers to a kind of oligarchy, or who can control decision making.

    Ah, yes, this is very helpful. Some basis from which to work. I have to think about this some more. To this point, is it recognized that Justice has reached a plateau, and that "Good Governance" is a result of the laws written to date? If Good Governance is to evolve along with the laws, then the laws will have to change? This, then, is what we will see in the 21st Century?

    Figure 2: Characteristics of good governance

    The concept of "governance" is not new. It is as old as human civilization. Simply put, "governance" means: the process of decision-making and the process by which decisions are implemented (or not implemented). Governance can be used in several contexts such as corporate governance, international governance, national governance and local governance. See: WHAT IS GOOD GOVERNANCE?

    Governments, then, shall represent the following eight characteristics of Good Governance, and if any of these Governments are found lacking in any of these interpretive representations, then we shall see where democracy has been slighted by factors of extreme misuse of democracy?


    From the above discussion it should be clear that good governance is an ideal which is difficult to achieve in its totality. Very few countries and societies have come close to achieving good governance in its totality. However, to ensure sustainable human development, actions must be taken to work towards this ideal with the aim of making it a reality. WHAT IS GOOD GOVERNANCE?

    What ratio and percentage would apply to Governments of the World (Provincial jurisdictions) according to policies of Good Governance based on the UN interpretation? This could cause disenfranchisement of its participants while recognizing the idea about the possible definitions of Justice and the values assigned to those eight characteristics?


    While one may have exhausted the challenge of a "logic-forming apparatus to conclude in law," what becomes "self-evident" comes under the "Aristotelian view of logic." What remains, then, is to push forward with an "objective look" for a solution. What appeals to my mind after this exhaustion is to now consider the subject of "lateral movement," which is to produce "new creative moments" toward idea development for this new "21st Century view" in law?

    This was not inconsistent with Plato's Ideal, from idea manifestation toward an ideal per se, but brings us much closer to understanding the relationship Plato had with Aristotle and the view I am pushing toward the future of societies.

    In this week's edition of The Interview, Edward de Bono tells Lyse Doucet how he became aware of the failings of conventional thought, how he has championed his new way to business leaders, politicians and children, and why he still wants to realise his dream of establishing a Palace of Thinking to encourage a revolution. See: The Interview

    One does not discount the process of deliberation, with rigour and analysis, to arrive at this shift in perspective. Plato's dialogues serve to propel the writing forward in exchanges toward an ideal Plato himself held, yet this is not to say that the constructs developed from such exchange could not have warranted further examination under historical analysis.

    His contention is that just as language has allowed one generation to pass useful knowledge onto the next, it has also allowed dangerous myths and out-of-date ideas to become enshrined. See: Edward de Bono

    I understood then the reference to myths and out-of-date ideas in relation to the previous commenter's point on "purely logical or reductionist thinking," related to the article placed for inspection: Edward de Bono "has set out to challenge the logical, truth-seeking process established by the Greek philosophers 2,400 years ago and cemented in Western culture in the Middle Ages by the church."

    It would be interesting, then, to see what Edward de Bono has to say about "justice as an ideal" and its relation to the current laws of countries in place. This suggests to me that a future-forming perspective, according to a timeline from the "past to the future," is an evolutionary one, and that such a trend in politics would have to coincide with the development of the laws associated with the governance of that country.

    The concept of "governance" is not new. It is as old as human civilization. Simply put "governance" means: the process of decision-making and the process by which decisions are implemented (or not implemented). Governance can be used in several contexts such as corporate governance, international governance, national governance and local governance.

    What then is the trademark of Good Governance?

    Logic-forming and reductionist thinking have taken us to this point in time. Such a request for lateral thinking would then have to include all that came before Good Governance in order for Good Governance to evolve to what it is today. What shall Good Governance become, in terms of its laws, in the 21st Century?


    See: Oligarchy: A Historical Look from Plato's Dialogues