QUANTA

Thursday, June 30, 2011


Scientists use optogenetics to control reward-seeking behavior

June 30, 2011 

Researchers at the University of North Carolina at Chapel Hill have manipulated brain wiring responsible for reward-seeking behaviors in mice, using optogenetic stimulation targeting the path between two critical brain regions, the amygdala and the nucleus accumbens.

The finding points to potential treatments for addiction and other neuropsychiatric diseases, according to the researchers.

With the optogenetic technique, scientists transfer light-sensitive proteins called “opsins” (proteins derived from microorganisms such as algae and bacteria, which use light to navigate or produce energy) into the mammalian brain cells they wish to study. Then they shine laser beams onto the genetically manipulated brain cells, either exciting or blocking their activity with millisecond precision.

They used this technique to excite (activate) the connections between the amygdala and the nucleus accumbens, essentially “rewarding” the rodents with laser stimulation when they poked their noses into a hole in their cage. They found that mice genetically treated with the light-sensitive opsins quickly learned to “nose-poke” to receive stimulation of the neural pathway. In comparison, genetically untouched control mice never caught on to the task.
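
The closed-loop logic behind the task is simple enough to sketch in a few lines of Python (a toy simulation under assumptions of mine, not the lab's actual code): every detected nose-poke triggers a laser pulse train, but only mice expressing the opsin experience the stimulation as reinforcement, so only their poke rate climbs.

    import random

    def deliver_pulse_train(pulses=20):
        # Hypothetical laser-driver stub: fire a brief train of light
        # pulses at the amygdala-to-accumbens pathway.
        return pulses

    def run_session(pathway_is_light_sensitive, n_trials=2000, seed=0):
        rng = random.Random(seed)
        p_poke, pokes = 0.05, 0  # both groups start with rare exploratory pokes
        for _ in range(n_trials):
            if rng.random() < p_poke:
                pokes += 1
                deliver_pulse_train()
                if pathway_is_light_sensitive:
                    # Stimulation reinforces only if the pathway responds to light.
                    p_poke = min(0.95, p_poke + 0.02)
        return pokes

    print("opsin mouse pokes:  ", run_session(True))
    print("control mouse pokes:", run_session(False))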

The researchers are now exploring how changes to this segment of brain wiring can either make an animal sensitized to or oblivious to rewards.

The researchers said their approach presents a useful tool for studying basic brain function, and could one day provide a powerful alternative to electrical stimulation or pharmacotherapy for neuropsychiatric illnesses like Parkinson’s disease.

Source: http://goo.gl/qyRAi



How electrons become entangled

June 30, 2011

An international team of researchers led by scientists at Princeton University has used lasers to peek into the complex relationship between a single electron and its environment, a breakthrough that could aid the development of quantum computers.

The research brings fresh insight to the study of the Kondo problem, a phenomenon first observed in the 1930s, when researchers were surprised to find that resistance to electricity flowing through certain metals increases at very low temperatures. Normally, resistance through metals decreases as temperature is lowered, but that was not the case with these metals.
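
For context, the standard textbook picture (a sketch of mine, not part of the article) captures the surprise: magnetic impurities add a term to the metal's resistivity that grows logarithmically as the temperature falls, and its competition with ordinary phonon scattering produces a resistance minimum:

    \rho(T) \approx \rho_0 + aT^5 + c\,\ln\!\left(\frac{T_K}{T}\right),
    \qquad
    \frac{d\rho}{dT} = 5aT^4 - \frac{c}{T} = 0
    \;\Rightarrow\;
    T_{\min} = \left(\frac{c}{5a}\right)^{1/5},

where \rho_0 is the residual resistivity, aT^5 is low-temperature phonon scattering, c scales with the impurity concentration, and T_K is the Kondo temperature.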

The researchers investigated the use of a laser to probe electrons evolving into the Kondo state. They first developed a theory about how laser light scattered off electrons could carry information about this process. Depending on the state of the electron, they surmised, it should absorb different colors of laser light to varying degrees. The light reflected back would carry a signature of the entangled quantum state, offering a window into the relationship between the trapped electron and its environment.

To isolate the electrons, the researchers used nanostructured devices: small machines built one atom at a time that trap the electrons in small wells. The wells provide the particles only limited isolation, so they eventually become entangled with a cloud of surrounding electrons in the device.

The researchers tested the idea by projecting a laser beam on the device and measuring the light that was transmitted. The light signature matched theoretical predictions.

Source: http://goo.gl/wWXfj




ULTIMATE LUMINARIES ON MOST ADVANCED INTELLIGENCE AND TECHNOLOGICAL PROGRESSION (AS OF 2007)

“...Some people say that computers can never show true intelligence, whatever that may be. But it seems to me that if very complicated chemical molecules can operate in humans to make them intelligent, then equally complicated electronic circuits can also make computers act in an intelligent way. And if they are intelligent, they can presumably design computers that have even greater complexity and intelligence...” By Dr. Stephen Hawking

“...One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible...” By Dr. Nick Bostrom

“...We have a hard time motivating people to do stuff in the service of abstract nouns like 'liberty,' but 'singularity' is so abstract as to make 'liberty' seem as concrete as 'imminent car-wreck.' The singularity needs to be the mere abstract cherry on the concrete cake: the funny curiosity to consider as the end-point of a bunch of imminent, relevant, concrete changes in our lives that we need to prepare for and prepare the way for...” By Cory Doctorow

“...To any thoughtful person, the singularity idea, even if it seems wild, raises a gigantic, swirling cloud of profound and vital questions about humanity and the powerful technologies it is producing. Given this mysterious and rapidly approaching cloud, there can be no doubt that the time has come for the scientific and technological community to seriously try to figure out what is on humanity's collective horizon. Not to do so would be hugely irresponsible...” By Dr. Douglas R. Hofstadter

“...What, then, is the Singularity? It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's own particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a 'singularitarian'...” By Ray Kurzweil

“...The Singularity is a frightening prospect for humanity. I assume that we will somehow dodge it or finesse it in reality, and one way to do that is to warn about it early and begin to build in correctives...” By Stewart Brand

“...It is clear from my work that to tell a truly compelling story, a machine would need to understand the 'inner lives' of his or her characters. And to do that, it would need not only to think mechanically in the sense of swift calculation (the forte of supercomputers like Deep Blue), it would also need to think experientially in the sense of having subjective or phenomenal awareness. For example, a person can think experientially about a trip to Europe as a kid, remember what it was like to be in Paris on a sunny day with an older brother, smash a drive down a fairway, feel a lover's touch, ski on the edge, or need a good night's sleep. But any such example, I claim, will demand capabilities no machine will ever have. Renowned human storytellers understand this concept. For example, playwright Henrik Ibsen said: 'I have to have the character in mind through and through, I must penetrate into the last wrinkle of his soul.' Such a modus operandi is forever closed off to a machine...” By Dr. Selmer Bringsjord

“...There's this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don't notice it. You've got AI systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an AI scheduling system. Every time you use a piece of Microsoft software, you've got an AI system trying to figure out what you're doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer–generated characters, they're all little AI characters behaving as a group. Every time you play a video game, you're playing against an AI system...” By Dr. Rodney Brooks

“...If there is a key driving force pushing towards a singularity, it's international competition for power. This ongoing struggle for power and security is why, in my view, attempts to prevent a singularity simply by international fiat are doomed. The potential capabilities of transformative technologies are simply staggering. No nation will risk falling behind its competitors, regardless of treaties or UN resolutions banning intelligent machines or molecular–scale tools. The uncontrolled global transformation these technologies may spark is, in strategic terms, far less of a threat than an opponent having a decided advantage in their development - a 'singularity gap,' if you will. The 'missile gap' that drove the early days of the nuclear arms race would pale in comparison...” By Jamais Cascio

“...The world is organized by embodied beings like us to be coped with by beings like us. The computer would be totally lost in our world. It would have to have in it a model of the world and a model of the body, which AI researchers have tried, but it's certainly hopeless. Without that, the world is just utterly un-graspable by computers... The truth is that human intelligence can never be replaced with machine intelligence simply because we are not ourselves thinking machines. Each of us has, and uses every day, a power of intuitive intelligence that enables us to understand, to speak, and to cope skilfully with our everyday environment...” By Dr. Hubert Dreyfus

“...If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts...” By Bill Gates

“...There is no good reason to believe that the emergence of the modern human mind is the end state of the evolution of psyche. Indeed, the rub is this: While evolution might take millions of years to generate another psychological sea change as dramatic as the emergence of modern humanity, technology may do the job much more expediently. The Singularity can be expected to induce rapid and dramatic change in the nature of life, mind and experience...” By Dr. Ben Goertzel

“...It's haughty of us to think we're the end product of evolution. All of us are a part of producing whatever is coming next. We're at an exciting time. We're close to the singularity. Go back to that litany of chemistry leading to single–celled organisms, leading to intelligence. The first step took a billion years, the next step took a hundred million, and so on. We're at a stage where things change on the order of decades, and it seems to be speeding up. Technology has the autocatalytic effect of fast computers, which let us design better and faster computers faster. We're heading toward something which is going to happen very soon – in our lifetimes – and which is fundamentally different from anything that's happened in human history before...” By Dr. W. Daniel Hillis

“...The 21st–century technologies — genetics, nanotechnology, and robotics (GNR) – are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them. Thus we have the possibility not just of weapons of mass destruction but of knowledge–enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self–replication. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation–states, on to a surprising and terrible empowerment of extreme individuals...” By Bill Joy

“...Every cybernetic totalist fantasy relies on artificial intelligence. It might not immediately be apparent why such fantasies are essential to those who have them. If computers are to become smart enough to design their own successors, initiating a process that will lead to God-like omniscience after a number of ever-swifter passages from one generation of computers to the next, someone is going to have to write the software that gets the process going, and humans have given absolutely no evidence of being able to write such software. So the idea is that the computers will somehow become smart on their own and write their own software ... My primary objection to this way of thinking is pragmatic: It results in the creation of poor-quality real-world software in the present. Cybernetic totalists live with their heads in the future and are willing to accept obvious flaws in present software in support of a fantasy world that might never appear ... The whole enterprise of artificial intelligence is based on an intellectual mistake, and continues to expensively turn out poorly designed software as it is remarketed under a new name for every new generation of programmers...” By Jaron Lanier

“...Two quite detailed scenarios have emerged, one the Moravec/Kurzweil scenario, which we might call the 'Out to Pasture in the Elysian Fields,' that foresees machines as intelligent as humans, maybe more so, in 50 years and on the whole, a good thing. This leads to questions both Moravec and Kurzweil, to their credit, raise about whether those machines will take over for us (or from us), the basis of the second scenario, Bill Joy's quite opposite and dark vision, which posits the same improvement in machine intelligence, but with a horrifying outcome, the 'NanoGenRoboNightmare.' Some believers in the Elysian fields scenario have been arguing about 'the singularity,' borrowed from science fiction writer Vernor Vinge, the moment AI becomes powerful and ubiquitous enough so that all of the rules change and there's no going back. [...] I don't consider either of these scenarios implausible...” By Pamela McCorduck

“...We need to do an unlikely thing: we need to survey the world we now inhabit and proclaim it good. Good enough. Not in every detail; there are a thousand improvements, technological and cultural, that we can and should still make. But good enough in its outlines, in its essentials. We need to decide that we live, most of us in the West, long enough. We need to declare that, in the West, where few of us work ourselves to the bone, we have ease enough. In societies where most of us need storage lockers more than we need nanotech miracle boxes, we need to declare that we have enough stuff. Enough intelligence. Enough capability. Enough...” By Bill McKibben

“...Only a small community has concentrated on general intelligence. No one has tried to make a thinking machine and then teach it chess — or the very sophisticated oriental board game Go. [...] The bottom line is that we really haven't progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack. [...] We have got to get back to the deepest questions of AI and general intelligence and quit wasting time on little projects that don't contribute to the main goal...” By Dr. Marvin Minsky

“...It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half–century of development. Indeed, for that reason, many long–time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty... Since 1990, the power available to individual AI and robotics programs has doubled yearly, to 30 MIPS (million instructions per second) by 1994 and 500 MIPS by 1998. Seeds long ago alleged barren are suddenly sprouting. Machines read text, recognize speech, even translate languages. Robots drive cross–country, crawl across Mars, and trundle down office corridors. In 1996 a theorem–proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a Boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years. And it is still only Spring. Wait until Summer...” By Dr. Hans Moravec

“...In the end, this search for ways to enhance ourselves is a natural part of being human. The urge to transform ourselves has been a force in history as far back as we can see. It's been selected for by millions of years of evolution. It's wired deep in our genes — a natural outgrowth of our human intelligence, curiosity, and drive. To turn our backs on this power would be to turn our backs on our true nature. Embracing our quest to understand and improve on ourselves doesn't call into question our humanity — it reaffirms it...” By Ramez Naam

“...I want to focus on a different aspect of Ken MacLeod's 'Rapture of the Nerds' comment, because I actually think it cuts both ways. Yes, it's possible to draw parallels between the Christian idea of The Rapture — and, even more generally, between religious ideas of transcendence generally — and the notion that, once human technology passes a certain threshold, roughly that described by Vinge and other singularity enthusiasts, human beings will potentially enjoy the kind of powers and pleasures traditionally assigned to gods or beings in heaven: Limitless lifespans, if not immortality, superhuman powers, virtually limitless wealth, fleshly pleasures on demand, etc. .... These do sound like the sorts of things that religions have promised their followers throughout human history. That leads some who invoke MacLeod's comment to contend that because singularity enthusiasts hope for the same kinds of things that religious believers have hoped for, singularity enthusiasts are merely adherents to a new sort of religion, the religion of science ... But as Isaac Asimov has noted, the religion of science is distinguished by one chief characteristic: 'that it works.' I express no opinion on whether science will actually deliver on these hopes. But I note that people once looked to supernatural sources for such now-mundane things as cures for baldness or impotence, only to find those desires satisfied, instead, by modern pharmacology. Yet that hardly makes those who place their faith in pharmacology members of a religion — or, if it does, it makes them members of a religion that is distinguishable from those dependent on the supernatural...” By Glenn Harland Reynolds

“...'Could a machine think?' My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not...” By Dr. John Searle

“...Before the invention of writing, almost every insight was happening for the first time (at least to the knowledge of the small groups of humans involved). When you are at the beginning, everything is new. In our era, almost everything we do in the arts is done with awareness of what has been done before. In the early post–human era, things will be new again because anything that requires greater than human ability has not already been done by Homer or da Vinci or Shakespeare...” By Dr. Vernor Vinge

“...I certainly think that humans are not the limit of evolutionary complexity. There may indeed be post–human entities, either organic or silicon–based, which can in some respects surpass what a human can do. I think it would be rather surprising if our mental capacities were matched to understanding all the key levels of reality. The chimpanzees certainly aren't, so why should ours be either? So there may be levels that will have to await some post-human emergence...” By Sir Martin Rees

Source: http://singinst.org/summit2007/quotes/




Wednesday, June 29, 2011

STATE OF THE UNIVERSE: AN EXECUTIVE SUMMARY

As we mark the fifth anniversary of our annual study of the digital universe, it behooves us to take stock of what we have learned about it over the years. We always knew it was big; it cracked the zettabyte barrier in 2010. In 2011, the amount of information created and replicated will surpass 1.8 zettabytes (1.8 trillion gigabytes), growing by a factor of 9 in just five years.

But, as digital universe cosmologists, we have also uncovered a number of other things — some predictable, some astounding, and some just plain disturbing.

While 75% of the information in the digital universe is generated by individuals, enterprises have some liability for 80% of information in the digital universe at some point in its digital life.  

The number of "files," or containers that encapsulate the information in the digital universe, is growing even faster than the information itself as more and more embedded systems pump their bits into the digital cosmos. In the next five years, these files will grow by a factor of 8, while the pool of IT staff available to manage them will grow only slightly.

Less than a third of the information in the digital universe can be said to have at least minimal security or protection; only about half the information that should be protected is protected.

The amount of information individuals create themselves — writing documents, taking pictures, downloading music, etc. — is far less than the amount of information being created about them in the digital universe.

The growth of the digital universe continues to outpace the growth of storage capacity. But keep in mind that a gigabyte of stored content can generate a petabyte or more of transient data that we typically don't store (e.g., digital TV signals we watch but don't record, voice calls that are made digital in the network backbone for the duration of a call).  

So, like our physical universe, the digital universe is something to behold — 1.8 trillion gigabytes in 500 quadrillion "files" — and more than doubling every two years. That's nearly as many bits of information in the digital universe as stars in our physical universe.  
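
A quick back-of-the-envelope check of those headline numbers (my arithmetic, not IDC's), in Python:

    import math

    total_bytes = 1.8e21   # 1.8 zettabytes
    files = 500e15         # 500 quadrillion "files"

    annual = 9 ** (1 / 5)                       # factor of 9 over five years
    doubling = math.log(2) / math.log(annual)   # implied doubling time

    print(f"annual growth: {annual:.2f}x")          # ~1.55x per year
    print(f"doubling time: {doubling:.1f} years")   # ~1.6, i.e. more than doubling every two years
    print(f"average file size: {total_bytes / files:,.0f} bytes")  # ~3,600 bytes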
 
However, unlike our physical universe where matter is neither created nor destroyed, our digital universe is replete with bits of data that exist but for a moment — enough time for our eyes or ears to ingest the information before the bits evaporate into a nonexistent digital dump.

This is not to diminish the value of the temporary existence of these bits that can serve a variety of purposes during their short lives, such as driving consumption (to increase ad revenue from Web site traffic) or real-time data analytics (to optimize existing operations and create entirely new markets).

What are the forces behind the explosive growth of the digital universe? Certainly technology has helped by driving the cost of creating, capturing, managing, and storing information down to one-sixth of what it was in 2005. But the prime mover is financial. Since 2005, the investment by enterprises in the digital universe has increased 50% — to $4 trillion. That's money spent on hardware, software, services, and staff to create, manage, and store — and derive revenues from — the digital universe.  

In an information society, information is money. The trick is to generate value by extracting the right information from the digital universe — which, at the microcosmic level familiar to the average CIO, can seem as turbulent and unpredictable as the physical universe.

In fact, thanks to new tools and technologies, and new IT and organizational practices, we may be on the threshold of a major period of exploration of the digital universe. The convergence of technologies now makes it possible not only to transform the way business is conducted and managed but also to alter the way we work and live.

Considerations

New capture, search, discovery, and analysis tools can help organizations gain insights from their unstructured data, which accounts for more than 90% of the digital universe. These tools can create data about data automatically, much like facial recognition routines that help tag Facebook photos. Data about data, or metadata, is growing twice as fast as the digital universe as a whole.

Business intelligence tools increasingly are dealing with real-time data, whether it's charging auto insurance premiums based on where people drive, routing power through the intelligent grid, or changing marketing messages on the fly based on social networking responses.  

New storage management tools are available to cut the costs of the part of the digital universe we store, such as deduplication, auto-tiering, and virtualization, as well as to help us decide what exactly to store, as in content management solutions.

An entire industry has grown up to help us follow the rules (laws, regulations, and customs) pertaining to information in the enterprise. It is now possible to get regulatory compliance systems built into storage management systems.

New security practices and tools can help enterprises identify the information that needs to be secured and at what level of security and then secure the information using specific threat protection devices and software, fraud management systems, or reputation protection services.

Cloud computing solutions — both public and private and a combination of the two known as hybrid — provide enterprises with new levels of economies of scale, agility, and flexibility compared with traditional IT environments. In the long term, this will be a key tool for dealing with the complexity of the digital universe.

Cloud computing is enabling the consumption of IT as a service. Couple that with the "big data" phenomenon, and organizations increasingly will be motivated to consume IT as an external service versus internal infrastructure investments.  

Journey to the Cloud  

As the digital universe expands and gets more complex, processing, storing, managing, securing, and disposing of the information in it become more complex as well.  

Consider this: Over the next decade, the number of servers (virtual and physical) worldwide will grow by a factor of 10, the amount of information managed by enterprise datacenters will grow by a factor of 50, and the number of files the datacenter will have to deal with will grow by a factor of 75, at least. Meanwhile, the number of IT professionals in the world will grow by less than a factor of 1.5.
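
Put another way (my arithmetic, not IDC's), the information each IT professional must manage grows by roughly the ratio of those two factors:

    info_growth, staff_growth = 50, 1.5
    print(f"information per IT professional: ~{info_growth / staff_growth:.0f}x")  # ~33x over the decade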

As a result, the skills, experience, and resources to manage all these bits of data will become scarcer and more specialized, requiring a new, flexible, and scalable IT infrastructure, extending beyond the enterprise. Today we call it cloud computing.

And while cloud computing accounts for less than 2% of IT spending today, IDC estimates that by 2015 nearly 20% of the information will be "touched" by cloud computing service providers — meaning that somewhere in a byte's journey from originator to disposal it will be stored or processed in a cloud. Perhaps as much as 10% will be maintained in a cloud.  

Much of the current movement to cloud architectures is being enabled by pervasive adoption of virtualization. Last year was the first year in which more virtual servers were shipped than physical servers. IDC estimates that today nearly 10% of the information running through servers is doing so on virtualized systems and expects that number to grow to more than 20% in 2015. This percentage increases along with the size of the organization. Some larger environments today operate with 100% virtualized systems.  
 
Of course, cloud services come in various flavors — public, private, and hybrid. For organizations to offer their own cloud services, they have to do more than just run virtual servers. They must also allow for virtualized storage and networking, self-provisioning, and self-service — and provide information security and billing. Few enterprises are here yet, so the impact of private clouds on the digital universe today is small. But by 2015, when the virtualized infrastructure is more common, the rate of growth will accelerate.

Big Value from Big Data

Big data is a big dynamic that seemed to appear from nowhere. But in reality, big data isn't new. Instead, it is something that is moving into the mainstream and getting big attention, and for good reason. Big data is being enabled by inexpensive storage, a proliferation of sensor and data capture technology, increasing connections to information via the cloud and virtualized storage infrastructures, and innovative software and analysis tools. Big data is not a "thing" but instead a dynamic/activity that crosses many IT borders. IDC defines it this way:

Big data technologies describe a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data, by enabling high-velocity capture, discovery, and/or analysis.

Big data is a horizontal cross-section of the digital universe and can include transactional data, warehoused data, metadata, and other data residing in ridiculously large files. Media/entertainment, healthcare, and video surveillance are obvious examples of new segments of big data growth. Social media solutions such as Facebook, Foursquare, and Twitter are the newest data sources. Essentially, they have built systems where consumers (consciously or unconsciously) provide near-continuous streams of data about themselves, and thanks to the "network effect" of successful sites, the total data generated can expand at rapid exponential rates.

It is important to understand that big data is not only about the original content stored or being consumed but also about the information around its consumption. Smartphones are a great illustration of how our mobile devices produce additional data sources that are being captured and that include geographic location, text messages, browsing history, and (thanks to the addition of accelerometers and GPS) even motion or direction.

Source: http://goo.gl/PxPc2


World’s data will grow by 50X in next decade, IDC study predicts

June 29, 2011

In 2011, the amount of information created and replicated will surpass 1.8 zettabytes (1.8 trillion gigabytes), growing by a factor of 9 in just five years, according to the fifth annual IDC Digital Universe study released Tuesday.

By 2020, the world will generate 50 times the amount of information and 75 times the number of “information containers,” while the IT staff available to manage it all will grow less than 1.5 times.

In the next five years, these information containers will grow by a factor of 8, while the pool of IT staff available to manage them will grow only slightly.

The IDC study predicts that overall data will grow by 50 times by 2020, and unstructured information — such as files, email and video — will account for 90% of all data created over the next decade.

Source: http://goo.gl/vzakQ

LSD Alleviates 'Suicide Headaches'

by Kai Kupferschmidt on 27 June 2011

BERLIN—Patients suffering from the agony of cluster headaches will take anything to dull the pain, even LSD, it turns out. Results from a pilot study presented here on Saturday at the International Headache Congress reveal that six patients treated with 2-bromo-LSD, a nonhallucinogenic analog of LSD, showed a significant reduction in cluster headaches per day; some were free of the attacks for weeks or months.

"Some of these patients are still reporting significant relief more than a year after they were treated with the compound," says John Halpern, a psychiatrist at Harvard Medical School in Boston and one of the investigators involved in the study. "Nobody has ever reported these kinds of results."

Cluster headaches, sometimes referred to as "suicide headaches" because of the almost unbearable pain they cause sufferers, usually involve just one side of the face; patients often liken the pain to someone trying to pull their eye out for hours. They can occur in bouts lasting many weeks, with several attacks a day.

"What causes these attacks is still not clear," says Peter Goadsby, a headache expert at the University of California, San Francisco, who is not connected with the research. But recent studies suggest that changes in the structure of the hypothalamus are involved. Because that part of the brain is responsible for, among other things, circadian rhythms, the daily cycle of our body that dictates when we sleep but also regulates body temperature and blood pressure, it could explain the periodicity of attacks and why they seem to occur particularly often around the solstices.

Although there is no cure, patients can sometimes abort an attack by inhaling pure oxygen at its onset. Other treatments include blocking calcium channels with the drug verapamil, which is also used for cardiac arrhythmia, or taking triptans, also used for migraines. Some patients have also reported finding relief in hallucinogenic drugs such as LSD and psilocybin.

Those reports intrigued Torsten Passie, a psychiatrist at Hannover Medical School in Germany and an expert on LSD. So he, Halpern, and colleagues decided to test 2-bromo-LSD (BOL), a compound originally developed as a kind of placebo for LSD trials by Sandoz, the Swiss company that discovered the psychedelic effects of LSD and for a time marketed it as a drug.

At the conference, Halpern and Passie presented the data of six patients with severe cluster headache who were given BOL once every 5 days for a total of three doses. All patients reported a reduction in frequency of attacks, and five patients reported having no attacks for months afterward.

"There seems to be a long-term prophylactic effect that we cannot explain," Halpern says. The team has since treated a seventh patient with similar results. "Compared to what these headache sufferers currently have available to them, this is quite remarkable. It could lead to a near-cure-like treatment", Halpern says. He and Passie have founded a company called Entheogen Corp. to fund further research and are hoping to start a phase II clinical trial with 50 patients later this year.

Goadsby points to shortcomings in the research, however. "These are just a few patients in a completely unblinded study; you would certainly expect some placebo effect," he says. Indeed, Goadsby has done a double-blind study comparing pure oxygen and air in the treatment of cluster headache. Twenty percent of the patients treated with air, the placebo, reported pain reduction. Because cluster headaches can occur in episodes and then vanish again for months or years, it is also difficult to distinguish a drug's long-term effect from normal attack patterns, Goadsby cautions. "Still," he says, "this is an interesting study, and it certainly warrants further investigation."

Source: http://goo.gl/fmaLw


Blue Brain: Illuminating the Mind

JUNE 6, 2005

Scientists will use the blazingly fast supercomputer to do never-before-possible research into how we think and how mental disorders arise 

On July 1, the Blue Brain computer will wake up, marking "a monumental moment" in the history of brain research, says neuroscientist Henry Markram, founder of the Brain Mind Institute at Switzerland's Ecole Polytechnique Fédérale de Lausanne (EPFL). The event could usher in a new era of scientific discoveries about the workings of the human mind.

The Blue Brain computer is the latest installation of IBM's BlueGene/L system, a radically new approach in supercomputer design. EPFL's machine has a peak speed of some 22.8 teraflops -- meaning it can theoretically spit out 22.8 trillion calculations every second. That blazing speed should put Blue Brain among the world's top 15 supercomputers. (The world champ is the BlueGene system at Lawrence Livermore National Laboratory -- when finished later this year, it will have a peak speed of 367 teraflops.)

"A UNIQUE FACILITY."  Markram's EPFL team, collaborating with IBM researchers and an online network of brain and computer scientists, will use Blue Brain to create a detailed computer model of the neocortex, the largest and most complex part of the human brain. "That's going to take two to three years," he says. 

Then, with a bigger Blue Brain, he hopes to build a cellular-level model of the entire brain. This may take a decade -- even with IBM's next-generation system, BlueGene/P. Markram can't wait to get his hands on one of these number-crunching beasts. 

BlueGene/P will have faster processors and could ultimately reach petaflop speeds -- quadrillions of calculations per second. "We're planning on a very long-term effort," notes Markram. "We're creating a unique facility for researchers worldwide." Adds Charles Peck, the IBM researcher who leads the Blue Brain effort at IBM's research division in Yorktown Heights, N.Y.: "There's now a tremendous opportunity to do some science that up to this point just hasn't been possible."

THINKING MYSTERY.  The Blue Brain Project will search for novel insights into how humans think and remember. Plus, by running accurate simulations of brain processes, "we'll be able to investigate questions about psychiatric disorders and how they arise," Markram says. Scientists believe that autism, schizophrenia, depression, and other psychological problems are caused by defective or malfunctioning circuitry in the brain.

Parkinson's disease is another target, adds Markram. "There's a group of cells deep down in the mid-brain that produce dopamine, and when these cells begin to die and dopamine production decreases, you get Parkinson's," he explains. "We'll be able to mimic this," creating simulations that should make Blue Brain an invaluable tool for drug-company researchers on the track of treatments or cures for Parkinson's. 

Learning how the brain works has been one of science's great challenges. Researchers still don't have a holistic grasp of how we think. One reason: Most research so far has been conducted with "wet" experiments -- stimulating or dissecting the brains of mice, rats, and other animals. Markram notes that "some 'wet-lab' experiments are incredibly complicated," taking up to three years and costing $1 million. 

With simulations on Blue Brain, he predicts, "we'll be able to do that same work in days, maybe seconds. It's going to be absolutely phenomenal." 

CONSTANTLY CHANGING CIRCUITRY.  Markram first broached the idea of a BlueGene-based collaboration five years ago, right after IBM unveiled the supercomputer system. "Even before that, Henry had been wanting to go down this path of computer simulations," says IBM's Peck. "But only now is it actually feasible." 

That's because the brain is so extraordinarily complex that an enormously powerful computer is required. The brain's physical structure and electrochemical operations are very intricate. Complicating things still further is its constantly changing internal circuitry. "The brain is in a very different state in the morning, when you wake up, than it is at noontime," Markram points out. 

Fifty years ago, he notes, "we believed that memories were somehow hardwired into the brain. But our lab [EPFL's Laboratory of Neural Microcircuitry] has been one of the main propagators of a new theory, in which the brain is incredibly fluid. It's restructuring itself continuously -- self-organizing and reorganizing all the time." 

HUGE SIMULATION.  If brain circuitry is in a constant state of flux, Markram insists, then long-term memories can't be permanent, hardwired fixtures. To explain how memories are preserved, he and his colleagues cooked up the "liquid-computing" theory. Validating this concept with Blue Brain, he hints, might point to new types of silicon circuits that perform new and more-complex functions -- which IBM could use to build a revolutionary brain-like computer.

"That's a possibility," says Tilak Agerwala, a vice-president at IBM Research. "But we're still very far from understanding how the brain works, so it's much too early to know if we should build computers that way." However, the notion already has a fancy moniker: biometaphorical computing. 

For now, Markram sees the BlueGene architecture as the best tool for modeling the brain. Blue Brain has some 8,000 processors, and by mapping one or two simulated brain neurons to each processor, the computer will become a silicon replica of 10,000 neurons. "Then we'll interconnect them with the rules [in software] that we've worked out about how the brain functions," says Markram. 

The result will be a full-fledged model of 10,000 neurons jabbering back and forth -- a simulation 1,000 times larger than any similar model to date. 
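
A minimal sketch of such a mapping (an illustration under assumptions of mine, not the Blue Brain software itself): place neurons on processors round-robin, so 2,000 of the 8,000 processors host two simulated neurons and the other 6,000 host one.

    N_NEURONS, N_PROCS = 10_000, 8_000

    def assign(neuron_id, n_procs=N_PROCS):
        # Round-robin placement: ids 0..7999 land one per processor;
        # ids 8000..9999 give processors 0..1999 a second neuron.
        return neuron_id % n_procs

    placement = {}
    for nid in range(N_NEURONS):
        placement.setdefault(assign(nid), []).append(nid)

    loads = [len(v) for v in placement.values()]
    print(f"{len(placement)} processors in use: "
          f"{loads.count(2)} host two neurons, {loads.count(1)} host one")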

FANTASTIC ACCELERATION.  This setup will form the foundation for studying neocortical columns -- the building blocks of the cortex and the part of the brain that differentiates mammals from other animals. Each column is a bundle of networked neurons and is roughly 1/2 millimeter in diameter and 2 millimeters long. That's only about the size of a pinhead, Markram notes. "But packed inside are 50,000 neurons and more than 5 kilometers [3 miles] of wiring," he marvels. 

"The neocortical column is the beginning of intelligence and adaptability," Markram adds. "It marks the jump from reptiles to mammals." When it evolved, it was like Mother Nature had discovered the Pentium chip, he quips -- the circuitry "was so successful that it's just duplicated, with very little variation, from mouse to man. In the human cortex, there are just more cortical columns -- about 1 million." 

Since the neocortical column was first discovered 40 years ago, researchers have been painstakingly unraveling how it helps perform the miracles of thought that enable humans to be creative, inventive, philosophical creatures. "That's been my passion, my mission for 10 years," says Markram. "Now, we know how information is transferred from one neuron to another. We know how they behave -- what they do and whom they talk to. We've actually mapped that out."

Next, that knowledge will be transferred into a torridly fast silicon simulator. Blue Brain promises a fantastic acceleration in brain research. It could be as dramatic as the leap from chiseling numbers into Sumerian clay tablets some 5,000 years ago to crunching them in modern computers. And the Blue Brain Project just might culminate in a new breed of supersmart computers that will make even BlueGene/L seem like a piker.

Source: http://goo.gl/QySb6



Tuesday, June 28, 2011


Singularity Summit 2011 to be held in New York City Oct. 15-16

June 28, 2011 

Singularity Summit 2011 will be a TED-style two-day event on October 15–16 featuring futurist Ray Kurzweil and Jeopardy! champion Ken Jennings on IBM’s Watson, economist Tyler Cowen on the economic impacts of emerging technologies, and PayPal founder Peter Thiel on innovation and jump-starting the economy.

Other speakers include neuroscientist Christof Koch, MIT cosmologist Max Tegmark, AI researcher Eliezer Yudkowsky, MIT polymath Alexander Wissner-Gross, DARPA challenge winner Riley Crane, Skype founder Jaan Tallinn, television personalities Jason Silva and Casey Pieretti, and robotics professors James McLurkin and Robin Murphy.

The event will be held at the 92nd Street Y in New York City.

Source: http://goo.gl/6B83w




Diabetes affects more than 300 million worldwide, but life expectancy has increased

June 28, 2011

Researchers at Imperial College London and the Harvard School of Public Health have found that the number of adults worldwide with diabetes reached 347 million in 2008, more than double the number in 1980.

Seventy per cent of the rise was due to population growth and aging, with the other 30 per cent due to higher prevalence.

Increased life expectancy

Despite this news, the life expectancy of people diagnosed with type 1 diabetes dramatically increased during the course of a long-term, 30-year prospective study, University of Pittsburgh Graduate School of Public Health researchers have found.

The life expectancy for participants diagnosed with type 1 diabetes between 1965 and 1980 was 68.8 years — a 15-year improvement compared to those diagnosed between 1950 and 1964. The 30-year mortality of participants diagnosed with type 1 diabetes from 1965 to 1980 was 11.6 percent — a significant decline from the 35.6 percent 30-year mortality of those diagnosed between 1950 and 1964.

Reversing diabetes with extreme diet

In addition, a team at Newcastle University has discovered that Type 2 diabetes can be reversed by an extreme low-calorie diet alone.

Under close medical supervision, 11 people who had developed diabetes later in life were put on a diet of just 600 calories a day consisting of liquid diet drinks and non-starchy vegetables. After just one week, the team found that the volunteers’ pre-breakfast blood sugar levels had returned to normal, fat levels in the pancreas had returned from an elevated level to normal, and the pancreas regained the normal ability to make insulin.

The researchers followed up on the volunteers three months later. During this time, the volunteers had returned to eating normally but had received advice on portion size and healthy eating.  Of the ten people re-tested, seven remained free of diabetes.

Source: http://goo.gl/FPhFA



Brain rhythm associated with learning linked to running speed

June 28, 2011 

Rhythms in the brain that are associated with learning become stronger as the body moves faster, neurophysicists at the University of California, Los Angeles, have found.

The experiment was performed by measuring electrical signals from hundreds of mouse neurons using microwires, the researchers said. Nearly a hundred gigabytes of data were collected every day.

Analysis of the data showed that the gamma rhythm, a fast signal that occurs while concentrating or learning, gradually grew stronger as the mice moved faster.

Does this mean movement or exercise could influence the learning process? The researchers said it is too early to tell.

Source: http://goo.gl/5GmDR



New software advances brain image research

June 28, 2011

A new software program that allows neuroscientists to produce single brain images pulled from hundreds of individual studies has been developed by researchers at the University of Colorado Boulder, trimming weeks and even months from the research process.

The new software can be programmed to comb scientific literature for published articles relevant to a particular topic, and then to extract all of the brain scan images from those articles, the researchers said. Using a statistical process called “meta-analysis,” the researchers are then able to produce a consensus brain activation image reflecting hundreds of studies at a time.
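
A toy version of that pipeline (my illustration; the team's actual software is far more sophisticated) makes the idea concrete: filter studies by a topic term, then report, for each brain voxel, the fraction of matching studies with an activation peak there.

    import numpy as np

    def consensus_map(studies, term, shape=(20, 20, 20)):
        # studies: dicts with "text" (article text) and "peaks" (activation
        # coordinates as voxel indices). Returns, per voxel, the fraction
        # of term-matching studies that report a peak there.
        counts = np.zeros(shape)
        matched = 0
        for s in studies:
            if term in s["text"].lower():  # crude stand-in for real text mining
                matched += 1
                for x, y, z in s["peaks"]:
                    counts[x, y, z] += 1
        return counts / max(matched, 1)

    studies = [
        {"text": "Thermal pain task ...", "peaks": [(10, 5, 7), (11, 5, 7)]},
        {"text": "Working memory load ...", "peaks": [(3, 12, 9)]},
        {"text": "Acute pain stimulation ...", "peaks": [(10, 5, 7)]},
    ]
    print(consensus_map(studies, "pain")[10, 5, 7])  # 1.0: both "pain" studies hit this voxel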

The research team was able to distinguish people who were experiencing physical pain during brain scanning from people who were performing a difficult memory task or viewing emotional pictures, with nearly 80 percent accuracy.

“Because the new approach is entirely automated, it can analyze hundreds of different experimental tasks or mental states nearly instantaneously instead of requiring researchers to spend weeks or months conducting just one analysis,” said Tal Yarkoni.

Source: http://goo.gl/SO5EM



‘Orca ears’ inspire researchers to develop ultrasensitive undersea microphone

June 27, 2011

Imagine a miniature microphone that responds to ocean sounds from 1 Hz to 100 kHz (from a deep, inaudible rumble to ultrasonic sounds), with a dynamic range of 160 dB (from a whisper in a quiet library to the sound of 1 ton of TNT exploding 60 feet away), and operates at any depth.

An amazing microphone that does all that — modeled after the extraordinarily acute hearing of orcas — has been developed by Onur Kilic and other researchers at Stanford University.

At the core of the microphone, the researchers fabricated a silicon chip with a thin membrane (diaphragm) about 500 nanometers thick and drilled a grid of tiny holes (about 100 nanometers across) in it to allow water to flow into the microphone, keeping the water pressure on each side of the membrane equal, no matter how deep.

They ran a fiber-optic cable into the water-filled microphone, with the end of the cable positioned near the inside surface of the diaphragm. They then shot laser light out the end of the cable onto the diaphragm. When a sound wave deformed the diaphragm ever so slightly, the intensity of the light reflected back into the cable changed, and that change was measured with an optical detector.

The result was a hydrophone that would function at any depth and could detect and measure sound with extreme accuracy. But to be able to capture the full range of volumes they were after (a spread of 160 decibels), they needed not just one diaphragm, but three.

By giving each diaphragm a different diameter, they were able to “tune” each one to maximize its sensitivity to a different part of the range of volumes they wanted to detect.

One was tuned to measure quiet sounds on the library-whisper end of the spectrum, one was attuned more to the loud, TNT explosion end of the range, and the third was tuned to the mid-range volumes.

Since they all measured the exact same signal — just with different degrees of responsiveness — they worked like a single sensor.
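
The fusion logic can be sketched in a few lines (my illustration, not the Stanford implementation): read all three diaphragms, skip any that are saturated, and rescale the most sensitive usable channel into common pressure units.

    def fuse(readings):
        # readings: (value, gain, full_scale) per diaphragm, ordered from
        # most to least sensitive. Returns one pressure estimate.
        for value, gain, full_scale in readings:
            if abs(value) < 0.95 * full_scale:  # channel not clipped: use it
                return value / gain
        value, gain, _ = readings[-1]           # everything clipped: best effort
        return value / gain

    quiet = [(2.0e-3, 1000.0, 1.0), (6.0e-5, 30.0, 1.0), (2.0e-6, 1.0, 1.0)]
    loud  = [(1.0,    1000.0, 1.0), (1.0,    30.0, 1.0), (0.62,   1.0, 1.0)]
    print(fuse(quiet))  # resolved by the most sensitive diaphragm
    print(fuse(loud))   # falls back to the least sensitive one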

Kilic said the uses for this would include ocean surveys and using the ocean as a giant neutrino detection system. He also told me the microphone fibers could be made into an array, which would allow for even greater sensitivity as well as localization. One could create a system that would operate very much like a whale's hearing (ranging over thousands of miles) and even do 3-D imaging.

He couldn’t comment on it, but this technology was funded by Litton Systems Inc., a subsidiary of Northrop Grumman, and seems perfect for long-distance submarine detection and for communicating signals via sound in the ocean, perhaps covertly over long distances.

I could say more, but I’d have to kill all our readers. Not a good editorial strategy.

Source: http://goo.gl/SizFj




Monday, June 27, 2011


China's high-speed rail: Smooth ride, but bumps ahead

Beijing (CNN) -- As our train raced through the familiar scenery of rural and small-town China on this sunny Monday morning -- alternating between green farmland and red tiled roofs -- my Chinese seatmate proudly pointed to a coffee cup on the tray table.

"Look, it's not moving at all," he said. "Impressive, isn't it?"

An amazingly smooth ride indeed -- considering we were traveling at 300 kilometers (186 miles) an hour, more than double the average speed of America's fastest train.

The shiny CRH380 model we rode serves the new 1,318-kilometer (819-mile) Beijing-Shanghai High-Speed Railway -- completed in just three years -- and cuts journey time between China's political center and commercial hub in half to under five hours.

Small wonder rail authorities had invited the entire foreign press corps for a preview trip a few days before the official launch on June 30.

Normally jaded reporters gushed over the futuristic bullet train with spacious and quiet cabins, as immaculately dressed attendants offered frequent drink and snack service.

"It's the longest high-speed line ever built in a single phase with the highest technological standards in the world," Vice Minister of Railways Hu Yadong told us early this month.

Now the world's second-largest economy, and flush with cash, China has been busy purchasing foreign technologies and constructing new rail lines. It boasts more than 8,300 kilometers (5,100 miles) of high-speed routes, turning a non-existent network into the world's longest in a few short years.

"It makes China more competitive," said Tom Callarman, a transportation professor at China Europe International Business School in Shanghai. "It gives people more options to move where the jobs are, and also separates people from the freight so the freight can move more efficiently."

Callarman, an American, describes the two countries' commitment to high-speed rail as "night and day."

While the White House has earmarked $8 billion for projects for fiscal year 2012, the Chinese government plans to pour over $400 billion into its program in the next five years.

The massive investment and rapid construction have raised public doubts about the new lines' safety record and commercial viability, amid state media reports of empty trains running between inconvenient new stations in less-developed provinces.

The skeptics' voices became louder after the former railway minister -- a champion of high-speed rail -- was sacked for corruption early this year.

"It's not the faster, the better," said Sun Zhang, a railway professor at Tongji University in Shanghai and a long-time railway ministry consultant. "We have to take safety, economics and environmental impact into consideration.

"Strategically we can talk about a great leap forward in the industry, but tactically we have to do things step by step."

Already, Sun says the railway ministry has realized the importance of a diverse network that also includes regular trains and freight lines. He adds that authorities have slowed down some bullet trains -- including the newest route -- to make the service safer and cheaper.

At $85, a roomy second-class seat from Beijing to Shanghai costs less than half of a full-fare economy-class air ticket, almost guaranteeing the new service's popularity in China's richest region.

And although the United States may be no match for China in building fast rail, America still wins hands down in another fast category: Fast food. The breakfast served onboard China's latest high-speed train: Chicken burgers from KFC.

Source: http://goo.gl/Zm5y9


