
watching nanoparticles grow

I have spent a lot of time over the past decade-and-a-half talking about nanotech and nanoparticles. The often unexpected properties of these tiny specks of matter give them applications in everything from synthetic antibodies to fuel cells to water filters and far beyond.
Recently, for the first time ever, scientists were able to watch the particles grow from their earliest stage of development. Given that the performance of nanoparticles is based on their structure, composition, and size, being able to see how they grow could lead to the development of better growing conditions, and thus better nanotechnology.
The research was carried out by a team of scientists from the Center for Nanoscale Materials, the Advanced Photon Source (both run by the US government's Argonne National Laboratory), and the High Pressure Synergetic Consortium (HPSynC).
The team used highly focused high-energy X-ray diffraction to observe the nanoparticles. Amongst other things, it was noted that the initial chemical reaction often occurred quite quickly, then continued to evolve over time.
“It’s been very difficult to watch these tiny particles be born and grow in the past because traditional techniques require that the sample be in a vacuum and many nanoparticles are grown in a metal-conducting liquid,” said study coauthor Wenge Yang. “We have not been able to see how different conditions affect the particles, much less understand how we can tweak the conditions to get a desired effect.”
HPSynC’s Russell Hemley added, “This study shows the promise of new techniques for probing crystal growth in real time. Our ultimate goal is to use these new methods to track chemical reactions as they occur under a variety of conditions, including variable pressures and temperatures, and to use that knowledge to design and make new materials for energy applications.”
The research was recently published in the journal Nano Letters.

bipedal humanoid robots will inhabit the moon by 2015

Here I go with another moon-themed post. Seemingly, my son's fascination with our closest neighbor is starting to rub off. My son and I talk a lot about space exploration, and it's been more than 40 years since the first human set foot on the moon. So where are all the robot space explorers? While rovers like those that have been trawling the Martian surface in recent times could properly be called robots, and machines like the legless R2 (seen in the video below) are heading to space, these don't match the classic science fiction image of a bipedal humanoid bot that we've all become accustomed to. Now a Japanese space-business group is promising to set things in order by sending a humanoid robot to the moon by 2015.

Japan's Space Oriented Higashiosaka Leading Association (SOHLA) expects to spend an estimated 1 billion yen (US$10.5 million) on getting the robot onto the lunar surface. Named Maido-kun after the satellite launched aboard a Japan Aerospace Exploration Agency (JAXA) H-IIA rocket in 2009, the robot appears to have no clearly defined mission (apart from getting there).

It's hoped that Maido-kun will travel to the moon on a JAXA mission planned for around 2015.

Why not stick to wheels? “Humanoid robots are glamorous, and they tend to get people fired up,” said SOHLA board member Noriyuki Yoshida. “We hope to develop a charming robot to fulfill the dream of going to space.”

Achieving the feat would certainly be another feather in the cap of Japan's world-leading robotics industry.

Robots In The Cloud

With the phrase "web 2.0" falling out of vogue, the most exciting new uses of the internet are now all about the cloud, a term for servers invisibly doing smart, fast things for net users who may be on the other side of the world.

But it's not just humans that stand to gain, as a recent corporate acquisition by cloud pioneer Google demonstrates. Google has snapped up British start-up Plink, which has devised a cellphone app that can identify virtually any work of art from a photograph. Plink's app will bolster Google's Goggles service, which uses a cellphone camera to recognise objects or even translate text. Unlike most cloud start-ups, Plink sprang from a robotics lab, not a Californian garage. Its story demonstrates how the cloud has as much to offer confused robots as it does humans looking for smarter web apps.

Spatial memory

 

Mark Cummins and James Philbin of Plink developed the tech while working in Paul Newman's mobile robotics research group and Andrew Zisserman's visual geometry group, both at the University of Oxford. The group is trying to enable robots to explore the cluttered human world alone. Although GPS is enough to understand a city's street layout, free-roaming robots will need to negotiate the little-mapped ins and outs of buildings, street furniture and more.

 

Image-recognition software developed at Oxford has helped their wheeled robots build their own visual maps of the city using cameras, developing a human-like ability to recognise when they have seen something before, even if it's viewed from a different angle or if other nearby objects have moved.

 

 

You are here

 

Plink gives cellphone users access to those algorithms. Photos they take of an artwork are matched against images on a database stored in the cloud, even if they were snapped from a different angle. Although the Oxford team's algorithms originally ran entirely on the robot, Newman is now working on moving the visual maps made by a robot into the cloud, to create a Plink-like service to help other robots navigate, he says. Like a user of Plink, a lost robot would take a photo of its location and send it via the internet to an image-matching server; after matching the photo with its map-linked image bank, the server would tell the robot of any matches that reveal where it is.
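To make the idea concrete, here is a minimal sketch, in Python, of how such a cloud place-recognition exchange might look. It is not Plink's or Oxford's actual code; the bag-of-visual-words signatures, class names and similarity threshold are all invented for illustration.

```python
# A minimal sketch (not Plink's or Oxford's actual code) of the cloud
# place-recognition idea: the "server" holds map-tagged image signatures,
# a lost robot sends the signature of its current view, and the server
# returns the best-matching known place. Image signatures are modelled
# here as simple bag-of-visual-words histograms; names are illustrative.
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count histograms."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class PlaceMatchingServer:
    """Cloud-side index of previously seen places (hypothetical API)."""
    def __init__(self):
        self._index = {}  # place name -> visual-word histogram

    def add_place(self, name, visual_words):
        self._index[name] = Counter(visual_words)

    def locate(self, visual_words, threshold=0.5):
        """Return (place, score) for the best match, or (None, score)."""
        query = Counter(visual_words)
        best = max(self._index.items(),
                   key=lambda kv: cosine_similarity(query, kv[1]),
                   default=(None, Counter()))
        score = cosine_similarity(query, best[1])
        return (best[0], score) if score >= threshold else (None, score)

# Robot side: extract visual words from the camera image (stubbed here)
# and ask the server where it is.
server = PlaceMatchingServer()
server.add_place("Broad Street, Oxford", ["spire", "spire", "bus_stop", "facade"])
server.add_place("Covered Market entrance", ["awning", "sign", "doorway"])

observation = ["spire", "facade", "bicycle"]   # words seen in the new photo
print(server.locate(observation))              # -> ('Broad Street, Oxford', ...)
```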

 

Newman is also testing that concept using cameras installed in cars. "We can drive around Oxford at up to 50 miles per hour doing place recognition on the road," he says.

 

If image maps from many cities were made into a cloud-like service, any camera-equipped car could look at buildings and other roadside features to tell where it was, and the results would be more accurate than is possible with GPS.

 

Adept users

 

Adept Technologies of Pleasanton, California, the largest US-based manufacturer of industrial robots, is also looking cloud-ward. Some of the firm's robots move and package products in warehouses. With access to a Plink-like image-recognition system they could handle objects never encountered before without reprogramming.

 

"This connection of automation to vast amounts of information will also be important for robots tasked with assisting people beyond the factory walls," says Rush LaSelle, the company's director of global sales. A "carebot" working in a less controlled environment such as a hospital or a disabled person's home, for instance, would have to be able to cope with novel objects and situations.

 

Cellphones, humans and robots all have a lot to gain from a smarter, faster cloud.

Mind-controlled prosthetics without brain surgery

Mind-reading is powerful stuff, but what about hand-reading? Intricate, three-dimensional hand motions have been "read" from the brain using nothing but scalp electrodes. The achievement brings closer the prospect of thought-controlled prosthetics that do not require brain surgery.

Electroencephalography (EEG), which measures electrical activity through the scalp, was previously considered too insensitive to relay the neural activity involved in complex movements of the hands. Nevertheless, Trent Bradberry and colleagues at the University of Maryland, College Park, thought the idea worth investigating.

The team used EEG to measure the brain activity of five volunteers as they moved their hands in three dimensions, and also recorded the movement detected by motion sensors attached to the volunteers' hands. They then correlated the two sets of readings to create a mathematical model that converts one into the other.

In additional trials, this model allowed Bradberry's team to use the EEG readings to accurately monitor the speed and position of each participant's hand in three dimensions.
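The sort of model described above is, at its simplest, a linear mapping fitted between the two sets of recordings. The toy sketch below is my own illustration rather than the Maryland team's pipeline: it fits such a mapping by ordinary least squares on synthetic data, then uses it to reconstruct hand velocity from "EEG" alone.

```python
# Illustrative least-squares toy of decoding 3-D hand velocity from scalp
# EEG amplitudes. The synthetic data and channel count are made up; this
# is not the study's actual analysis code.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_channels = 2000, 34          # EEG channels (illustrative count)
eeg = rng.standard_normal((n_samples, n_channels))

# Pretend the true hand velocity is an unknown linear mixture of the EEG
# channels plus noise (in reality the relationship is far messier).
true_weights = rng.standard_normal((n_channels, 3))          # x, y, z velocity
hand_velocity = eeg @ true_weights + 0.5 * rng.standard_normal((n_samples, 3))

# "Correlate the two sets of readings": fit decoding weights by least
# squares on a training portion, with a bias column added.
split = n_samples // 2
X_train = np.hstack([np.ones((split, 1)), eeg[:split]])
W, *_ = np.linalg.lstsq(X_train, hand_velocity[:split], rcond=None)

# Decode held-out EEG and check how well it tracks the real movement.
X_test = np.hstack([np.ones((n_samples - split, 1)), eeg[split:]])
decoded = X_test @ W
for axis, name in enumerate("xyz"):
    r = np.corrcoef(decoded[:, axis], hand_velocity[split:, axis])[0, 1]
    print(f"correlation on {name}-axis velocity: {r:.2f}")
```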

If EEG can, contrary to past expectation, be used to monitor complex hand movements, it might also be used to control a prosthetic arm, Bradberry suggests. EEG is less invasive and less expensive than the implanted electrodes that have previously been used to control robotic arms and computer cursors by thought alone, he says.

Giving Prosthetic Limbs The Sense Of Touch

When I was a child I dreamt about one day becoming a biomedical engineer. And somewhere in my mother's attic lie the remnants of that dream, in the form of school papers and crayon drawings of the limbs, heads, and torsos that I one day hoped to design, engineer, and implant in human beings and animals. Aside from making me the creepiest kid in the neighborhood, this also provided me with a special way of thinking about the interconnections and relationships between machines and people. That dream ultimately evaporated in college when I chose to start monetizing my software development hobby instead of completing my inorganic chemistry studies.

Today, however, actual biomedical engineers have come one step closer to giving a sense of touch to prosthetics for humans. Existing robotic prostheses have limited motor control, provide no sensory feedback and can be uncomfortable to wear. In an effort to make a prosthesis that moves like a normal hand, researchers at the University of Michigan (U-M) have bioengineered a scaffold that is placed over severed nerve endings like a sleeve and could improve the function of prosthetic hands and possibly restore the sense of touch for injured patients.


To overcome the limitations of existing prostheses, the U-M researchers realized a better nerve interface was needed to control the upper extremity prostheses. So they created what they called an “artificial neuromuscular junction” composed of muscle cells and a nano-sized polymer placed on a biological scaffold. Neuromuscular junctions are the body's own nerve-muscle connections that enable the brain to control muscle movement.

When a hand is amputated, the nerve endings in the arm continue to sprout branches, growing a mass of nerve fibers that send flawed signals back to the brain. The bioengineered scaffold was placed over the severed nerve endings like a sleeve. The muscle cells on the scaffold and in the body bonded and the body's native nerve sprouts fed electrical impulses into the tissue, creating a stable nerve-muscle connection.

In laboratory rats, the bioengineered interface relayed both motor and sensory electrical impulses and created a target for the nerve endings to grow properly. This indicates that the interface may not only improve fine motor control of prostheses, but can also relay sensory perceptions such as touch and temperature back to the brain. Laboratory rats with the interface responded to tickling of feet with appropriate motor signals to move the limb.


The research project, which was funded by the Department of Defense, arose from a need for better prosthetic devices for troops wounded in Afghanistan and Iraq. The DoD and the Army have already provided $4.5 million in grants to support the research. Meanwhile, the University of Michigan research team has submitted a proposal to the Defense Advanced Research Projects Agency (DARPA) to begin testing the bioengineered interface in humans in three years.

SHAPE SHIFTING ROBOTS

I've written here about robots that use a variety of ways to get around, from caterpillar treads, to wheels, legs, wings and even combustion-driven pistons. But the title of the most interesting method of robot propulsion I’ve come across has to go to the shape-shifting ChemBot from iRobot. The ChemBot, which looks more like the Blob than most people’s preconceived ideas of what a robot should be, moves around by changing its shape in a process its developers call, “jamming skin enabled locomotion,” or JaSEL.

JaSEL is a physical process whereby a material is made to transition from a liquid-like to a solid-like state by increasing its density. The ChemBot achieves this process thanks to its hyper-elastic skin composed of multiple cellular compartments. These compartments are filled with air and loosely-packed particles. When the air is removed, the decrease in pressure constricts the skin and the particles shift slightly to fill the void left by the air, resulting in the solidification of the compartment.

Beneath the ChemBot's jammable skin is an incompressible fluid and an actuator that can vary its volume. Unjamming various compartments of the ChemBot's skin and inflating the interior actuator causes the skin to stretch, changing the shape of the robot. It is this method of controlled inflation that allows the ChemBot to roll around.
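As a rough illustration of that jam, unjam and inflate cycle, here is a purely conceptual control-loop sketch. iRobot has not published the ChemBot's control software, so the compartment layout, timing and actuator interface below are all assumptions made for the example.

```python
# Purely conceptual sketch of jamming-based locomotion. The compartment
# layout, timing and actuator interface are invented for illustration.
import time

class ChemBotModel:
    def __init__(self, n_compartments=8):
        # True = jammed (solid-like, vacuum applied), False = unjammed (soft)
        self.jammed = [True] * n_compartments

    def set_jammed(self, idx, state):
        self.jammed[idx] = state
        print(f"compartment {idx}: {'jammed (rigid)' if state else 'unjammed (soft)'}")

    def inflate_core(self, volume):
        # The central actuator displaces incompressible fluid; only the soft
        # (unjammed) compartments stretch, so the body bulges that way.
        soft = [i for i, j in enumerate(self.jammed) if not j]
        print(f"core inflated to {volume}: body bulges toward compartments {soft}")

def roll_step(bot, leading_side, dwell=0.1):
    """One locomotion step: soften the leading side, inflate, re-stiffen, deflate."""
    for idx in leading_side:
        bot.set_jammed(idx, False)       # release vacuum -> compartment softens
    bot.inflate_core(volume=1.3)         # bulge shifts the centre of mass forward
    time.sleep(dwell)
    for idx in leading_side:
        bot.set_jammed(idx, True)        # re-apply vacuum -> hold the new shape
    bot.inflate_core(volume=1.0)         # deflate; the body has rolled slightly

bot = ChemBotModel()
for step in range(3):
    roll_step(bot, leading_side=[(step * 2) % 8, (step * 2 + 1) % 8])
```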

It should come as no surprise that the ChemBot is the result of a US$3.3 million award from the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Research Office given to iRobot to “develop a soft, flexible, mobile robot that can identify and maneuver through openings smaller than its actual structural dimensions to perform Department of Defense (DoD) tasks within complex and highly cluttered environments.”

The disturbing video below of the ChemBot in action is as it appeared about a year ago, so it’s anyone’s guess how much more creepy the ChemBot is now. Apparently, it has a slightly different design and its creators are working towards including sensors on its body and even connecting multiple ChemBots.

Moving Towards An Open Singularity

Recently, I had a dialogue with some colleagues (Tina and RJ) about technology and the future. The focus of our discussion was the Metaverse and the Singularity, although my colleagues were unfamiliar with those exact terms. I believe the dialogue is important enough to share some thoughts about that discussion and the Singularity prior to the Singularity Summit (which is happening in NYC on October 3-4), and I encourage anyone reading this to attend.

 

Yes, this post is long, but worthwhile, if for no other reason than to share the ideas of the Singularity and the Metaverse as well as some new thoughts I had on those subjects.

 

So, the conversation with my colleagues went like this (paraphrasing):

 

- "What happens when.. virtual worlds meet geospatial maps of the planet?"

- "When simulations get real and life and business go virtual?"

- "When you use a virtual Earth to navigate the physical Earth, and your avatar becomes your online agent?"

-- "What happens then," I said, "is called the Metaverse."

I recall an observation made by polio vaccine pioneer Dr. Jonas Salk. He said that the most important question we can ask of ourselves is, "are we being good ancestors?"

 

This is a particularly relevant question for those of us that will be attending the Singularity Summit this year. In our work, in our policies, in our choices, in the alternatives that we open and those that we close, are we being good ancestors? Our actions, our lives have consequences, and we must realize that it is incumbent upon us to ask if the consequences we're bringing about are desirable.

 

This question was a big part of the conversation with my colleagues, although it is not an easy question to answer, in part because it can be an uncomfortable examination. But this question becomes especially challenging when we recognize that even small choices matter. It's not just the multi-billion dollar projects and unmistakably world-altering ideas that will change the lives of our descendants. Sometimes, perhaps most of the time, profound consequences can arise from the most prosaic of topics.

 

Which is why I'm going to write a bit here about video games.

 

Well, not just video games, but video games and camera phones (which, as many of my readers know, I happen to know quite a bit about), and Google Earth and the myriad day-to-day technologies that, individually, may attract momentary notice, but in combination may actually offer us a new way of grappling with the world. And just might, along the way, help to shape the potential for a safe Singularity.

 

In the Metaverse Roadmap Overview the authors sketch out four scenarios of how a combination of forces driving the development of immersive, richly connected information technologies may play out over the next decade. But what has struck me more recently about the roadmap scenarios is that the four worlds could also represent four pathways to a Singularity. Not just in terms of the technologies, but—more importantly—in terms of the social and cultural choices we make while building those technologies.

 

The four metaverse worlds emerged from a relatively commonplace scenario structure. The authors arrayed two spectra of possibility against each other, thereby offering four outcomes. Analysts sometimes refer to this as the "four-box" method, and it's a simple way of forcing yourself to think through different possibilities.

 

This is probably the right spot to insert this notion: scenarios are not predictions, they're provocations. They're ways of describing different future possibilities not to demonstrate what will happen, but to suggest what could happen. They offer a way to test out strategies and assumptions—what would the world look like if we undertook a given action in these four futures?

 

To construct the scenario set the authors selected two themes likely to shape the ways in which the Metaverse unfolds: the spectrum of technologies and applications ranging from augmentation tools that add new capabilities to simulation systems that model new worlds; and the spectrum ranging from intimate technologies, those that focus on identity and the individual, to external technologies, those that provide information about and control over the world around you. These two spectra collide and contrast to produce four scenarios.
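Written out literally, the four-box construction is nothing more than two axes crossed against each other. A tiny illustrative snippet, using the scenario names that are spelled out in the sections below:

```python
# The "four-box" construction: two axes, four combinations, each named
# as in the Metaverse Roadmap scenarios described in the following sections.
axes = {
    ("Simulation", "Intimate"): "Virtual Worlds",
    ("Simulation", "External"): "Mirror Worlds",
    ("Augmentation", "External"): "Augmented Reality",
    ("Augmentation", "Intimate"): "Lifelogging",
}
for (technology, focus), scenario in axes.items():
    print(f"{technology} x {focus} -> {scenario}")
```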

 

The first, Virtual Worlds, emerges from the combination of Simulation and Intimate technologies. These are immersive representations of an environment, one where the user has a presence within that reality, typically as an avatar of some sort. Today, this means World of Warcraft, Second Life, PlayStation Home and the like.

 

Over the course of the Virtual Worlds scenario, we'd see the continued growth and increased sophistication of immersive networked environments, allowing more and more people to spend substantial amounts of time engaged in meaningful ways online. The ultimate manifestation of this scenario would be a world in which the vast majority of people spend essentially all of their work and play time in virtual settings, whether because the digital worlds are supremely compelling and seductive, or because the real world has suffered widespread environmental and economic collapse.

 

The next scenario, Mirror Worlds, comes from the intersection of Simulation and Externally-focused technologies. These are information-enhanced virtual models or “reflections” of the physical world, usually embracing maps and geo-locative sensors. Google Earth is probably the canonical present-day version of an early Mirror World.

 

While undoubtedly appealing to many individuals, in my view the real power of the Mirror World setting falls to institutions and organizations seeking a more complete, accurate and nuanced understanding of the world's transactions and underlying systems. The capabilities of Mirror World systems are enhanced by a proliferation of sensors and remote data gathering, giving these distributed information platforms a global context. Geospatial, environmental and economic patterns could be easily represented and analyzed. Undoubtedly, political debates would arise over just who does, and does not, get access to these models and databases.

 

Thirdly, Augmented Reality looks at the collision of Augmentation and External technologies. Such tools would enhance the external physical world for the individual, through the use of location-aware systems and interfaces that process and layer networked information on top of our everyday perceptions.

 

Augmented Reality makes use of the same kinds of distributed information and sensory systems as Mirror Worlds, but does so in a much more granular, personal way. The AR world is much more interested in depth than in flows: the history of a given product on a store shelf; the name of the person waving at you down the street (along with her social network connections and reputation score); the comments and recommendations left by friends at a particular coffee shop, or bar, or bookstore. This world is almost vibrating with information, and is likely to spawn as many efforts to produce viable filtering tools as there are projects to assign and recognize new data sources.

 

Lastly, we have Lifelogging, which brings together Augmentation and Intimate technologies. Here, the systems record and report the states and life histories of objects and users, enhancing observation, recall, and communication. I've sometimes discussed one version of this as the "participatory panopticon."

Here, the observation tools of an Augmented Reality world get turned inward, serving as an adjunct memory. Lifelogging systems are less apt to be attuned to the digital comments left at a bar than to the spoken words of the person at the table next to you. These tools would be used to capture both the practical and the ephemeral, like where you left your car in the lot and what it was that made your spouse laugh so much. Such systems have obvious political implications, such as catching a candidate's gaffe or a bureaucrat's corruption. But they also have significant personal implications: what does the world look like when we know that everything we say or do is likely to be recorded?

 

This underscores a deep concern that crosses the boundaries of all four scenarios: trust.

 

"Trust" encompasses a variety of key issues: protecting privacy and being safely visible; information and transaction security; and, critically, honesty and transparency. It wouldn't take much effort to turn all four of these scenarios into dystopias. The common element of the malevolent versions of these societies would be easy to spot: widely divergent levels of control over and access to information, especially personal information. The ultimate importance of these scenarios isn't just the technologies they describe, but the societies that they create.

 

So what do these tell us about a Singularity?

 

Across the four Metaverse scenarios, we can see a variety of ways in which the addition of an intelligent system would enhance the audience's experience. Dumb non-player characters and repetitive bots in virtual worlds, for example, might be replaced by virtual people essentially indistinguishable from characters controlled by human users. Efforts to make sense of the massive flows of information in a Mirror World setting would be enormously enhanced with the assistance of a sophisticated machine analyst. Augmented Reality environments would thrive with truly intelligent agent systems, knowing what to filter and what to emphasize. In a lifelogging world, an intelligent companion in one's mobile or wearable system would be needed in order to figure out how to index and catalog memories in a personally meaningful way; it's likely that such a system would need to learn how to emulate your own thought processes, becoming a virtual shadow.

 

None of these systems would truly need to be self-aware, self-modifying intelligent machines—but in time, each could lead to that point.

 

But if the potential benefits of these scenario worlds would be enhanced with intelligent information technology, so too would the dangers. Unfortunately, avoiding dystopian outcomes is a challenge that may be trickier than some may expect—and is one with direct implications for all of our hopes and efforts for bringing about a future that would benefit human civilization, not end it.

 

It starts with a basic premise: software is a human construction. That's obvious when considering code written by hand over empty pizza boxes and stacks of paper coffee cups. But even the closest process we have to entirely computer-crafted software—emergent, evolutionary code—still betrays the presence of a human maker: evolutionary algorithms may have produced the final software, and may even have done so in ways that remain opaque to human observers, but the goals of the evolutionary process, and the selection mechanism that drives the digital evolution towards these goals, are quite clearly of human origin.

 

To put it bluntly, software, like all technologies, is inherently political. Even the most disruptive technologies, the innovations and ideas that can utterly transform society, carry with them the legacies of past decisions, the culture and history of the societies that spawned them. Code inevitably reflects the choices, biases and desires of its creators.

 

This will often be unambiguous and visible, as with digital rights management. It can also be subtle, as with operating system routines written to benefit one application over its competitors (I know some of you reading this are old enough to remember "DOS isn't done 'til Lotus won't run"). Sometimes, code may be written to reflect an even more dubious bias, as with the allegations of voting machines intentionally designed to make election-hacking easy for those in the know. Much of the time, however, the inclusion of software elements reflecting the choices, biases and desires of its creators will be utterly unconscious, the result of what the coders deem obviously right.

 

We can imagine parallel examples of the ways in which metaverse technologies could be shaped by deeply-embedded cultural and political forces: the obvious, such as lifelogging systems that know to not record digitally-watermarked background music and television; the subtle, such as augmented reality filters that give added visibility to sponsors, and make competitors harder to see; the malicious, such as mirror world networks that accelerate the rupture between the information haves and have-nots—or, perhaps more correctly, between the users and the used; and, again and again, the unintended-but-consequential, such as virtual world environments that make it impossible to build an avatar that reflects your real or desired appearance, offering only virtual bodies sprung from the fevered imagination of perpetual adolescents.

 

So too with what we today talk about as a "singularity." The degree to which human software engineers actually get their hands dirty with the nuts & bolts of AI code is secondary to the basic condition that humans will guide the technology's development, making the choices as to which characteristics should be encouraged, which should be suppressed or ignored, and which ones signify that "progress" has been made. Whatever the degree to which post-singularity intelligences would be able to reshape their own minds, we have to remember that the first generation will be our creations, built with interests and abilities based upon our choices, biases and desires.

 

This isn't intrinsically bad; emerging digital minds that reflect the interests of their human creators is a lever that gives us a real chance to make sure that a "singularity" ultimately benefits us. But it holds a real risk. Not that people won't know that there's a bias: we've lived long enough with software bugs and so-called "computer errors" to know not to put complete trust in the pronouncements of what may seem to be digital oracles. The risk comes from not being able to see what that bias might be.

 

Many of us rightly worry about what might happen with "Metaverse" systems that analyze our life logs, that monitor our every step and word, that track our behavior online so as to offer us the safest possible society—or best possible spam. Imagine the risks associated with trusting that when the creators of emerging self-aware systems say that they have our best interests in mind, they mean the same thing by that phrase that we do.

 

For me, the solution is clear. Trust depends upon transparency. Transparency, in turn, requires openness.

 

We need an Open Singularity.

 

At minimum, this means expanding the conversation about the shape that a singularity might take beyond a self-selected group of technologists and philosophers. An "open access" singularity, if you will. Ray Kurzweil's books and lectures are a solid first step, but the public discourse around the singularity concept needs to reflect a wider diversity of opinion and perspective.

 

If the singularity is as likely and as globally, utterly transformative as many here believe, it would be profoundly unethical to make it happen without including all of the stakeholders in the process—and we are all stakeholders in the future.

 

World-altering decisions made without taking our vast array of interests into account are intrinsically flawed, likely fatally so. They would become catalysts for conflicts, potentially even the triggers for some of the "existential threats" that may arise from transformative technologies. Moreover, working to bring in diverse interests has to happen as early in the process as possible. Balancing and managing a global diversity of needs won't be easy, but it will be impossible if democratization is thought of as a bolt-on addition at the end.

 

Democracy is a messy process. It requires give-and-take, and an acknowledgement that efficiency is less important than participation.

 

We may not have an answer now as to how to do this, how to democratize the singularity. If this is the case—and I suspect that it is—then we have added work ahead of us. The people who have embraced the possibility of a singularity should be working at least as hard on making possible a global inclusion of interests as they do on making the singularity itself happen. All of the talk of "friendly AI" and "positive singularities" will be meaningless if the only people who get to decide what that means are the few hundred who read and understand this blog posting.

 

My preferred pathway would be to "open source" the singularity, to bring in the eyes and minds of millions of collaborators to examine and co-create the relevant software and models, seeking out flaws and making the code more broadly reflective of a variety of interests. Such a proposal is not without risks. Accidents will happen, and there will always be those few who wish to do others harm. But the same is true in a world of proprietary interests and abundant secrecy, and those are precisely the conditions that can make effective responses to looming disasters difficult. With an open approach, you have millions of people who know how dangerous technologies work, know the risks that they hold, and are committed to helping to detect, defend and respond to crises. That these are, in Bill Joy's term, "knowledge-enabled" dangers means that knowledge also enables our defense; knowledge, in turn, grows faster as it becomes more widespread. This is not simply speculation; we've seen time and again, from digital security to the global response to influenza, that open access to information-laden risks ultimately makes them more manageable.

 

The Metaverse Roadmap offers a glimpse of what the next decade might hold, but does so recognizing that the futures it describes are not end-points, but transitions. The choices we make today about commonplace tools and everyday technologies will shape what's possible, and what's imaginable, with the generations of technologies to come. If the singularity is in fact near, the fundamental tools of information, collaboration and access will be our best hope for making it happen in a way that spreads its benefits and minimizes its dangers—in short, making it happen in a way that lets us be good ancestors.

 

If we're willing to try, we can create a future, a singularity, that's wise, democratic and sustainable—a future that's open. Open as in transparent. Open as in participatory. Open as in available to all. Open as in filled with an abundance of options.

 

The shape of tomorrow remains in our grasp, and will be determined by the choices we make today. Choose wisely.

robot teachers

I have been thinking about how humans learn now and could learn in the future. Recently, various studies have been published including some that document the amazing amount of brain development that happens in infants and later on in childhood.

This is especially relevant to me as my son is just about to be 17 months old, and as I watch him grow and learn, I have been doing some new thinking about education. I also do a great deal of work for the Educational Testing Service. Founded in 1947, ETS develops, administers and scores more than 50 million knowledge metrics, or tests, annually.

So, here's my prognostication: in the future, more and more of us will learn from social robots, especially kids learning pre-school skills and students of all ages studying a new language. There will be a new "science of learning," which brings together recent findings from the fields of psychology, neuroscience, machine learning and education.

The premise for my thinking: We humans are born immature and naturally curious, and become creatures capable of highly complex cultural achievements — such as the ability to build schools and school systems that can teach us how to create computers that mimic our brains.

With a stronger understanding of how this learning happens, scientists are coming up with new principles for human learning, new educational theories and designs for learning environments that better match how we learn best. And social robots have a potentially growing role in these future learning environments. The mechanisms behind these sophisticated machines apparently complement some of the mechanisms behind human learning.

One such robot, which looks like the head of Albert Einstein, was revealed this week to show facial expressions and react to real human expressions. The researchers who built the strikingly real-looking yet body-less 'bot plan to test it in schools.

 

Machine learning

In the first 5 years of life, our learning is exuberant and "effortless." We are born learning, and adults are driven to teach infants and children. During those years and up to puberty, our brains exhibit "neural plasticity" — it's easier to learn languages, including foreign languages. It's almost magical how we learn what becomes our native tongue in the first two or three years we're alive.

Magic aside, our early learning is computational.

Children under three and even infants have been found to use statistical thinking, such as frequency distributions and probabilities and covariation, to learn the phonetics of their native tongue and to infer cause-effect relationships in the physical world.
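As a toy illustration of the kind of statistic involved (not a model from any of the studies mentioned), the snippet below computes transitional probabilities between adjacent syllables in a made-up speech stream, the sort of cue infants appear to exploit when segmenting words from continuous speech.

```python
# Toy illustration of statistical learning: the transitional probability
# between adjacent syllables tends to be high inside words and lower across
# word boundaries. The "speech stream" below is invented.
from collections import Counter

stream = ("bi da ku pa do ti go la bu bi da ku go la bu pa do ti "
          "bi da ku pa do ti go la bu").split()

pair_counts = Counter(zip(stream, stream[1:]))
syllable_counts = Counter(stream[:-1])

def transitional_probability(first, second):
    """P(second | first): how often 'first' is followed by 'second'."""
    return pair_counts[(first, second)] / syllable_counts[first]

print(transitional_probability("bi", "da"))  # within a "word": high
print(transitional_probability("ku", "pa"))  # across a boundary: lower
```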

Some of these findings have helped engineers build machines that can learn and develop social skills, such as BabyBot, a baby doll trained to detect human faces.

Meanwhile, our learning is also highly social, so social, in fact, that newborns as young as 42 minutes old have been found to match gestures shown to them, such as someone sticking out her tongue or opening his mouth.

Imitation is a key component to our learning — it's a faster and safer way to learn than just trying to figure something out on our own.

Even as adults, we use imitation when we go to a new setting such as a dinner party or a foreign country, to try and fit in. Of course, for kids, the learning packed into every day can amount to traveling to a foreign country. In this case, they are "visiting" adult culture and learning how to act like the people in our culture, becoming more like us.

If you roll all these human learning features into the field of robotics, there is a somewhat natural overlap — robots are well-suited to imitate us, learn from us, socialize with us and eventually teach us.

Robot teachers

Social robots are being used on an experimental basis already to teach various skills to preschool children, including the names of colors, new vocabulary words and simple songs.

In the future, robots will only be used to teach certain skills, such as acquiring a foreign or new language, possibly in playgroups with children or to individual adults. But robot teachers can be cost-effective compared to the expense of paying a human teacher.

If we can capture the magic of social interaction and pedagogy, what makes social interaction so effective as a vehicle for learning, we may be able to embody some of those tricks in machines, including computer agents, automatic tutors, and robots.

Still, children clearly learn best from other people and playgroups of peers, and I don't see children in the future being taught entirely by robots.

Terrence Sejnowski of the Temporal Dynamics of Learning Center (TDLC) at the University of California, San Diego, is working on using technology to merge the social with the instructional, bringing it to bear in classrooms to create personalized, individualized teaching that is tailored to students and tracks their progress.

"By developing a very sophisticated computational model of a child's mind, we can help improve that child's performance," Sejnowski said.

Overall, the hope, in my mind anyway, would be to figure out how to combine the passion and curiosity for learning that children display with formal schooling. There is no reason why curiosity and passion can’t be fanned at school where there are dedicated professionals, teachers, trying to help children learn. Right?

ROBOTS WITH FEELINGS

Humanoid robots are being developed all over the world for all sorts of purposes - but assisting the sick and elderly are two of the most popular applications. The problem is that sick and elderly people are usually confronted with robots that have a cold, emotionless aura. This is where a new robot called KOBIAN comes in.

 

KOBIAN is being produced by Waseda University in Tokyo along with the robot venture Tmsuk. They are calling KOBIAN an “emotional humanoid robot,” designed to express a total of seven different emotions, including the ability to “cry,” be happy or sad, and act surprised or angry. It is also able to walk around and move its arms and hands.

The robot boasts an “expressive face” controlled by motors that move its eyelids, lips and eyebrows, resulting in more human-like “behavior.” It’s not able to move around autonomously yet, but the makers aim to further improve KOBIAN and make it available for use in nursing homes and hospitals.

But for now it is sufficiently creepy - which to me is kind of cool. Check out the video...

Killer Robots In Warfare

They have no fear, they never tire, they are not upset when the soldier next to them gets blown to pieces. Their morale doesn't suffer by having to do, again and again, the jobs known in the military as the Three Ds - dull, dirty and dangerous.

They are military robots and their rapidly increasing numbers and growing sophistication may herald the end of thousands of years of human monopoly on fighting war. "Science fiction is moving to the battlefield. The future is upon us," as Brookings scholar Peter Singer put it to a conference of experts at the U.S. Army War College in Pennsylvania this month.

Singer just published "Wired for War: The Robotics Revolution and Conflict in the 21st Century," a book that traces the rise of the machines and predicts that in future wars they will play greater roles not only in executing missions but also in planning them.

Numbers reflect the explosive growth of robotic systems. The U.S. forces that stormed into Iraq in 2003 had no robots on the ground. There were none in Afghanistan either. Now those two wars are fought with the help of an estimated 12,000 ground-based robots and 7,000 unmanned aerial vehicles (UAVs), the technical term for drone, or robotic aircraft.

Ground-based robots have saved hundreds of lives in Iraq, defusing improvised explosive devices, which account for more than 40 percent of U.S. casualties. The first armed robot was deployed in Iraq in 2007 and it is as lethal as its acronym is long: Special Weapons Observation Remote Reconnaissance Direct Action System (SWORDS). Its mounted M249 machinegun can hit a target more than 3,000 feet away with pin-point precision.

From the air, the best-known UAV, the Predator, has killed dozens of insurgent leaders - as well as scores of civilians whose deaths have prompted protests from both Afghanistan and Pakistan.

The Predators are flown by operators sitting in front of television monitors in cubicles at Creech Air Force Base in Nevada, 8,000 miles from Afghanistan and Taliban sanctuaries on the Pakistani side of the border with Afghanistan. The cubicle pilots in Nevada run no physical risks whatever, a novelty for men engaged in war.

TECHNOLOGY RUNS AHEAD OF ETHICS

Reducing risk, and casualties, is at the heart of the drive for more and better robots. Ultimately, that means "fully autonomous engagement without human intervention," according to an Army communication to robot designers. In other words, computer programs, not a remote human operator, would decide when to open fire. What worries some experts is that technology is running ahead of deliberations of ethical and legal questions.

Robotics research and development in the U.S. received a big push from Congress in 2001, when it set two ambitious goals: by 2010, a third of the country's long-range attack aircraft should be unmanned; and by 2015 one third of America's ground combat vehicles. Neither goal is likely to be met but the deadline pushed non-technological considerations to the sidelines.

A recent study prepared for the Office of Naval Research by a team from the California Polytechnic State University said that robot ethics had not received the attention it deserved because of a "rush to market" mentality and the "common misconception" that robots will do only what they have been programmed to do.

"Unfortunately, such a belief is sorely outdated, harking back to the time when computers were simpler and their programs could be written and understood by a single person," the study says. "Now programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty since portions of programs may interact in unexpected, untested ways."

That's what might have happened during an exercise in South Africa in 2007, when a robot anti-aircraft gun sprayed hundreds of rounds of cannon shell around its position, killing nine soldiers and injuring 14.

Beyond isolated accidents, there are deeper problems that have yet to be solved. How do you get a robot to tell an insurgent from an innocent? Can you program the Laws of War and the Rules of Engagement into a robot? Can you imbue a robot with its country's culture? If something goes wrong, resulting in the death of civilians, who will be held responsible?

The robot's manufacturer? The designers? Software programmers? The commanding officer in whose unit the robot operates? Or the U.S. president who in some cases authorises attacks? (Barack Obama has given the green light to a string of Predator strikes into Pakistan).

While the United States has deployed more military robots - on land, in the air and at sea - than any other country, it is not alone in building them. More than 40 countries, including potential adversaries such as China, are working on robotics technology. Which leaves one to wonder how the ability to send large numbers of robots, and fewer soldiers, to war will affect political decisions on force versus diplomacy.

You need to be an optimist to think that political leaders will opt for negotiation over war once combat casualties come home not in flag-decked coffins but in packing crates destined for the robot repair shop.

Unmanned Underwater Vehicle Competition

The University of Maryland has won the 11th Annual International Autonomous Underwater Vehicle Competition in San Diego, California. The event is organized by the Association for Unmanned Vehicle Systems International and the Office of Naval Research, and challenges universities to design and build an AUV capable of navigating realistic underwater missions.
Twenty-five teams from the US, India, Canada and Japan participated in the AUV competition, which involved dead reckoning approximately 50 feet through the starting gate, pipeline following, buoy docking, tracking and hovering over an acoustic pinger, grabbing an object and surfacing with the object to a floating ring.
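For readers unfamiliar with the term, dead reckoning just means estimating where you are by integrating your own speed and heading over time, with no external position fixes. A minimal 2-D sketch (the numbers are illustrative, not from any competing vehicle):

```python
# Minimal 2-D dead-reckoning sketch: integrate speed and heading over time
# to estimate position. All values are illustrative.
import math

def dead_reckon(start, legs):
    """start: (x, y) in feet; legs: list of (heading_deg, speed_ft_s, seconds)."""
    x, y = start
    for heading_deg, speed, duration in legs:
        heading = math.radians(heading_deg)
        x += speed * duration * math.cos(heading)
        y += speed * duration * math.sin(heading)
    return x, y

# Run straight at the gate for ~50 feet at 1 ft/s, then turn toward the pipeline.
print(dead_reckon((0.0, 0.0), [(0, 1.0, 50), (45, 1.0, 20)]))
```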

Click here to read a PDF of University of Maryland's Team Journal from the event.


Coming second in the competition was the University of Texas at Dallas, followed by École de Technologie Supérieure. A full list of the placings can be found here. The competition also gave out several special awards: the University of Colorado at Boulder won Best New Entry; the Delhi College of Engineering won Most Improved; the University of Wisconsin  won the Tupperware Use Award; the University of Ottawa won Persistence in Adversity; and Norwich University won the Innovation Award.

On August 8, the AUVSI and ONR also held their first International Autonomous Surface Vehicle Student Competition at San Diego’s 40-foot-deep Transducer Evaluation Center Pool. The craft had to face challenges including passing through a starting gate and steering a steady course, navigating between buoys, detecting and eliminating shore-bound threats, docking, and recovering a victim. Embry-Riddle University, Florida Atlantic University, École de Technologie Supérieure, the University of Central Florida, the University of Michigan, and Villanova University competed.

The Association for Unmanned Vehicle Systems International has over 1,400 member companies and organizations from 50 countries, making it the world’s largest non-profit organization devoted exclusively to advancing the unmanned systems community.

Facebook
Join the Facebook group for AUVSI Underwater Robot Makers.
Join the Facebook event AUVSI & ONR's 11th International Autonomous Underwater Vehicle Competition.

THE ROBOT WITH A BIOLOGICAL BRAIN

Meet Gordon, probably the world's first robot controlled exclusively by living brain tissue.

Stitched together from cultured rat neurons, Gordon's primitive grey matter was designed at the University of Reading by scientists who unveiled the neuron-powered machine on Wednesday. Their groundbreaking experiments explore the vanishing boundary between natural and artificial intelligence, and could shed light on the fundamental building blocks of memory and learning.

"The purpose is to figure out how memories are actually stored in a biological brain," said Kevin Warwick, a professor at the University of Reading and one of the robot's principle architects.

Observing how the nerve cells cohere into a network as they fire off electrical impulses, he said, may also help scientists combat neurodegenerative diseases that attack the brain such as Alzheimer's and Parkinson's. "If we can understand some of the basics of what is going on in our little model brain, it could have enormous medical spinoffs," he said.

Gordon has a brain composed of 50,000 to 100,000 active neurons. Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (roughly three-by-three inch) array of 60 electrodes. This "multi-electrode array" (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robot, and receiving impulses delivered by sensors reacting to the environment. Because the brain is living tissue, it must be housed in a special temperature-controlled unit -- it communicates with its "body" via a Bluetooth radio link.
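Conceptually, the control loop is a simple sense-act cycle: aggregate spike activity from the array drives the wheels, and sensor events are translated back into stimulation. The Python sketch below is a schematic stand-in only; the channel mapping, thresholds and radio interface are invented, and the Reading team's real system is far more involved.

```python
# Schematic sense-act loop for a culture-driven robot. The channel mapping,
# thresholds and sensor interface here are invented for illustration.
import random

def read_spike_counts(n_electrodes=60):
    """Stand-in for reading firing activity from the 60-electrode MEA."""
    return [random.randint(0, 20) for _ in range(n_electrodes)]

def drive_command(spikes, left_channels=range(0, 30), right_channels=range(30, 60)):
    """Map aggregate activity on each half of the array to wheel speeds."""
    left = sum(spikes[i] for i in left_channels)
    right = sum(spikes[i] for i in right_channels)
    total = left + right or 1
    return left / total, right / total          # normalised wheel speeds

def stimulate_on_obstacle(distance_cm, threshold_cm=10):
    """When a sensor sees a wall, send a stimulation pulse back to the culture."""
    if distance_cm < threshold_cm:
        print("obstacle close -> stimulating electrodes near the sensory region")

spikes = read_spike_counts()
print("wheel speeds:", drive_command(spikes))
stimulate_on_obstacle(distance_cm=7)
```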

The robot has no additional control from a human or computer.

From the very start, the neurons get busy. "Within about 24 hours, they start sending out feelers to each other and making connections," said Warwick. "Within a week we get some spontaneous firings and brain-like activity" similar to what happens in a normal rat -- or human -- brain, he added. But without external stimulation, the brain will wither and die within a couple of months.

"Now we are looking at how best to teach it to behave in certain ways," explained Warwick. To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot's sensors. As it confronts similar situations, it learns by habit. To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities -- several MEA "brains" that the scientists can dock into the robot. "It's quite funny -- you get differences between the brains," said Warwick. "This one is a bit boisterous and active, while we know another is not going to do what we want it to."

Mainly for ethical reasons, it is unlikely that researchers at Reading or the handful of laboratories around the world exploring the same terrain will be using human neurons any time soon in the same kind of experiments. But rat brain cells are not a bad stand-in: much of the difference between rodent and human intelligence, speculates Warwick, could be attributed to quantity, not quality.

Rat brains are composed of about one million neurons, the specialised cells that relay information across the brain via chemicals called neurotransmitters. Humans have 100 billion.

"This is a simplified version of what goes on in the human brain where we can look -- and control -- the basic features in the way that we want. In a human brain, you can't really do that," he says.

Robots Go Medieval

Sure, it's not uncommon to see one robot arm take a break from productivity to engage in some shenanigans potentially fraught with peril, but two robot arms slacking off and wielding weapons? Well, that's cause for some sort of celebration! As you can see in the video, however, whoever was responsible for this madness didn't completely let the arms loose on each other, which I hope means that will be coming in the "home version."

"e-skin" That Can Feel Like A Humans

Japanese researchers say they have developed a rubber that is able to conduct electricity well, paving the way for robots with stretchable "e-skin" that can feel heat and pressure like humans.

The material is the first in the world to solve the problems faced by metals -- which are conductive but do not stretch -- and rubber, which hardly transmits electricity, according to the team at the University of Tokyo.

The new technology is flexible like ordinary rubber but boasts conductivity some 570 times as high as commercially available rubbers filled with carbon particles, said the team led by Takao Someya at the university's School of Engineering.

If used as wiring, the material can make elastic integrated circuits (ICs), which can be stretched to up to 1.7 times their original size and mounted on curved surfaces with no mechanical damage or major change in conductivity.

One application of the material would be artificial skin on robots, said Tsuyoshi Sekitani, a research associate in the team.  "As robots enter our everyday life, they need to have sensors everywhere on their bodies like humans."

"Imagine they bump into babies. Robots need to feel temperatures, heat and pressure like we do to co-exist. Otherwise it would be dangerous," he said. The material itself can be stretched up to 2.3 times the original size but conductivity drops roughly by half at the maximum extension. It can be stretched by 38 percent with no significant change in conductivity -- still a breakthrough considering that metal wires break on strains of one to two percent, the team said.

The material is made by grinding carbon nanotubes, or tube-shaped carbon molecules, with an ionic liquid and adding it to rubber. Carbon nanotubes often bunch up together but the millimetre-long tubes coupled with the ionic liquid can be uniformly dispersed in rubber to realise both high conductivity and flexibility.

Sekitani said the new material could be used on the surface of steering wheels, which would analyse perspiration, body temperature and other data of the driver and judge whether he or she is fit enough to drive. "It could be completely integrated into the normal driving system, making users unaware of using it," he said.

Or it could be used on top of a mattress for bed-ridden people, watching if some parts of the body were under constant pressure and tilting the bed to change the patient's posture to prevent bedsores, Sekitani said. "Objects that come into contact with humans are often not square or flat. We believe interfaces between humans and electronics should be soft," he said.

The material could also give birth to a stretchable display, allowing people to take out a tiny sheet and stretch it to watch television. The team aims to put the elastic conductor to practical use in several years, Sekitani said.

"We can't rule out the possibility of using this in living bodies but we're sticking to using it in electronics," Sekitani added.

Personal Robot Industry To Grow To $15 Billion By 2015

You can't say I didn't tell you first...

A new study by ABI Research predicts that the personal robotics market will be worth $15 billion by 2015. The report examines the consumer market for toy robots like Sony's Aibo and the recently released iSobot, as well as increasingly sophisticated single-function “task” robots such as the Roomba vacuum cleaner and Looj gutter cleaning robot from iRobot.

The ABI “Personal Robotics” study also looks at developments in commercial robotics and software development platforms that will play an important role in the future of the market as operating systems become standardized and advances in commercial robotics flow through to consumer products.

ABI says that the forecast growth in the personal robotics market will see major advances at affordable consumer prices and provide revenue opportunities for a wide variety of companies, from small robotics-focused software companies and microcontroller vendors to larger semiconductor vendors and giants like Intel, Microsoft and the major automotive manufacturers.

Commenting on the industry's future, ABI Research principal analyst Philip Solis says: "Some people may spend as much on a multi-task humanoid robot as they do on a car, buying fewer, but more expensive, robots. This scenario will occur well in the future, but as we reach 2015, we can expect to see an increasing use of complex manipulators."

Toyota's New Robot For The Aged

Toyota Motor today unveiled a robot that can play the violin as part of its efforts to develop futuristic machines capable of assisting humans in Japan's greying society.

The 1.5-metre-tall (five-foot), two-legged robot wowed onlookers with a faultless rendition of Elgar's Pomp and Circumstance. With 17 joints in its hands and arms, the robot has human-like dexterity that could be applied to helping people in the home or in nursing and medical care, the carmaker said.

Toyota, which already uses industrial robots extensively in its car plants, said it aims to put robots capable of assisting humans into use by the early 2010s.

The new robots come three years after Toyota unveiled a trumpet-playing robot -- its first humanoid machine -- in a bid to catch up with robot technology frontrunners such as Honda Motor Co. and Sony Corp. Makers of robots see big potential for their use in Japan, where the number of elderly people is rapidly growing, causing labour shortages in a country that strictly controls immigration.

The Japanese are famed for their longevity, with more than 30,000 people aged at least 100 years old, a trend attributed to a healthy cuisine and an active lifestyle. But the ability to live longer is also presenting a headache, as the country has one of the lowest birthrates. Japan's most famous robot is arguably Asimo, an astronaut-looking humanoid developed by Honda which has been hired out as an office servant and has even popped up to offer toasts at Japanese diplomatic functions.

Toyota aims to start trials putting some of its new machines, including a mobility robot, into practical use in the second half of next year. Further work is also planned to improve the hand and arm flexibility of the violin-playing robot so it can use general-purpose tools. Carmakers are also looking to use robot technology to develop more sophisticated cars. "Technologies used to enrich the abilities of robots can also be used to improve the functionality of automobiles," said Toyota president Katsuaki Watanabe.

My Spouse The Droid

On November 6th, artificial intelligence researcher David Levy is releasing a new book that will change the way many people think about personal relationships in the future.

Levy is a really smart guy. He has worked in the field of Artificial Intelligence since he graduated from St. Andrews University, Scotland, in 1967. He led the team that won the 1997 Loebner Prize for Artificial Intelligence competition in New York. The Loebner is a kind of "world championship" for conversational software. And Levy, like me, believes that robots will evolve quickly into human companions. His new book makes the pretty obvious prediction that by 2050 humans will be marrying robots.

The book, "Love and Sex with Robots: The Evolution of Human-Robot Relationships," is really just a commercial version of his thesis, which he defended successfully on October 11, 2007, at Maastricht University. It examines how robots will become so human-like -- having intelligent conversations, displaying emotions and responding to human emotions -- that they'll be very much like a new race of people.

"Robots started out in factories making cars. There was no personal interaction," said Levy, who also is an International Chess Master and has been developing computer chess games for years. "Then people built mail cart robots, and then robotic dogs. Now robots are being made to care for the elderly. In the last 20 years, we've been moving toward robots that have more relationships with humans, and it will keep growing toward a more emotional relationship, a more loving one and a sexual one."

Building a sex-bot, or pleasure-bot, is a heck of a lot simpler than building a robot that could be a meaningful human companion. Well, I guess what is meaningful depends on who you ask. But, I believe that the bigger advancement in robotics will come in the form of enabling a robot to carry on an interesting conversation, have self-awareness and emotional capabilities.

"There are already people who are producing fairly crude personalities and fairly crude models of human emotions now," said Levy. "This will be among the harder parts of this process... Human/computer conversation has attracted a lot of research attention since the 1950s, and it hasn't made as much progress as you'd expect in 50 years. But computers are so much more powerful now and memory is so much better... so we'll see software that can have interesting, intelligent conversations. It's really essential that both sides are happy with the conversations they're having."

Robots will be able to have interesting conversations -- not yet at the level of a college graduate but enjoyable -- within 15 years. In 20 or 30 years, however, you can expect them to carry on sophisticated conversations. The robot's specific knowledge will be up to the owner. People will be able to order a customized companion, whether a friend who enjoys the arts or travel or a spouse.

"There will be different personalities and different likes and dislikes," he said. "When you buy your robot, you'll be able to select what kind of personality it will have. It'll be like ordering something on the Internet. What kind of emotional makeup will it have? How it should look. The size and hair color. The sound of its voice. Whether it's funny, emotional, conservative.

"You could choose a robot that is funny 40% of the time and serious 60% of the time," he added. "If you get fed up with your robot making jokes all the time, you can just download different software or change the settings on it. You'll be able to change the personality of the robot, its interests and its knowledge. If you're a movie buff, you can ask for a robot with a lot of knowledge about movies."

There is great social advantage to having robotic companions: they can fill out a group of friends, and shy or lonely people can have the companionship they're lacking. So, in between watching movies with their human companion and walking the dog, will the robots be off leading lives of their own?

Levy said he doesn't think that will happen by 2050, but it could occur by the turn of the next century. "The robot is probably sitting in the corner in your house waiting for you to decide what you'd like to do next... instead of out living a life of its own," he added. "In this time frame anyway, robots will be there when we need them, as we need them."

That, however, doesn't mean they won't become integrated into the family. In terms of how much time people spend with their robots and how attached they become to them, Levy said robots definitely will become family members. "By mid-century, I don't think the difference between robots and humans will be any more than the difference between people who live in Maine and people who live in the bayou of Louisiana," he noted. "People will be surprised to know that robots will have emotions like ours and they'll be sensitive to our emotions and needs."  

So what do researchers need to get robotics to this advanced level? First, they'll need much more powerful computer hardware that can handle the complex and computational-heavy applications that will be needed to design and run conversational capabilities, along with emotions and more advanced artificial intelligence. Once the hardware and software needs are in place, advances in robotics will quickly begin to multiply.

This is the exponential growth I often talk about. I cover a lot of this material in the lecture I gave in October at the School of Communications at Temple University. I posted it under the title "Persuading Machines: Marketing Communications for Non-Human Intelligence," and you can check it out here if you are interested.

Also, if you want, you can click here to buy "Love and Sex with Robots: The Evolution of Human-Robot Relationships," on Amazon.

Dashboard Droids

The designers over at Nissan must share my love of R2-D2 as is evidenced by their Pivo2 concept car. Check out the following video by Martyn Williams of IDG News Service:

The Pivo2 is a vehicle that comes equipped with a dashboard-based robot, or as I am certain they will come to be affectionately known, the "dashbot." The cute droid is there for a very good reason: lightening up the mood of those in the car. It turns out that helping to ensure the driver is in as good a mood as possible actually has some value. Known as Pivo-kun, this dashbot was developed after research showed that happy drivers have drastically lower accident rates than depressed ones.
 
"We have data that happy drivers' accident rates are drastically lower than depressed ones, so this robot stays there to make sure the driver is happy always," said Masato Inoue, chief designer at Nissan's exploratory design group, in an interview at the Tokyo Motor Show. "This guides the driver and sometimes cheers up the driver. For example, if the driver is irritated it might say 'Hey, you look somehow angry. Why? Please calm down.'" Go figure.
 
Hopefully the Pivo-kun's repertoire of knock-knock jokes can be dynamically updated. Check out Nissan's web site for the Pivo2 here.