MindJet Webinar: Innovation - Moving Inventions from Idea to Market

On April 6th I did a live webinar at the request of MindJet to discuss how to not only move ideas to market faster but also how to consolidate large amounts of information - all using MindManager and their Catalyst (cloud-based) product. If you are not familiar with it, MindManager is a software product developed by MindJet that I have been using daily for about nine years. MindJet's visual information maps (mind maps) start with a central theme, and then add branches with ideas, notes, images, tasks, hyperlinks and attachments. I use MindManager maps to capture and organize information, and transform my thoughts and ideas into fine-tuned documents. The webinar, which was about 30 minutes long, explored how I have enhanced creative and critical thinking skills among all employees, increased team alignment and individual productivity, and much more.

You can find an archive of it here on the MindJet web site if you want to give it a listen.

DUOMENTIS in the news

My latest venture DUOMENTIS has found its way into the news. Jack Marshall over at ClickZ wrote a nice article about it.

Growing laboratory-engineered miniature human livers

I enjoyed eating cow liver as a kid. I never understood why so many kids thought it was bad. It was and still is one of my favorite foods. Now, one day soon, I might just be able to grow my own livers for snacking anytime I'd like. And the bonus is that if my own liver wears out or fails I might be able to have a surgeon pop a new one in. Well, it might not be quite that simple. But in the quest to grow replacement human organs in the lab, livers are no doubt at the top of many a wish list. The liver performs a wide range of functions that support almost every organ in the body, and there is no way to compensate for the absence of liver function, so the ability to grow a replacement is the focus of many research efforts. Now, for the first time, researchers have been able to successfully engineer miniature livers in the lab using human liver cells.

The ultimate aim of the research carried out at the Institute for Regenerative Medicine at Wake Forest University Baptist Medical Center is to provide a solution to the shortage of donor livers available for patients who need transplants. The laboratory-engineered livers could also be used to test the safety of new drugs.

The livers engineered by the researchers are about an inch in diameter and weigh about 0.2 ounces (5.7 g). Even though the average adult human liver weighs around 4.4 pounds (2 kg), the scientists say an engineered liver would only need to weigh about one pound (454 g) to meet the minimum needs of the human body, because research has shown that human livers functioning at 30 percent of capacity are able to sustain the body.

“We are excited about the possibilities this research represents, but must stress that we’re at an early stage and many technical hurdles must be overcome before it could benefit patients,” said Shay Soker, Ph.D., professor of regenerative medicine and project director. “Not only must we learn how to grow billions of liver cells at one time in order to engineer livers large enough for patients, but we must determine whether these organs are safe to use in patients.”

How the livers were engineered

To engineer the organs, the scientists took animal livers and treated them with a mild detergent to remove all cells in a process called decellularization. This left only the collagen “skeleton” or support structure which allowed the scientists to replace the original cells with two types of human cells: immature liver cells known as progenitors, and endothelial cells that line blood vessels.

Because the network of vessels remains intact after the decellularization process the researchers were able to introduce the cells into the liver skeleton through a large vessel that feeds a system of smaller vessels in the liver. The liver was then placed in a bioreactor, special equipment that provides a constant flow of nutrients and oxygen throughout the organ.

Flexible biocompatible LEDs for next gen biomedicine

Researchers from the University of Illinois at Urbana-Champaign have created bio-compatible LED arrays that can bend, stretch, and even be implanted under the skin. You can see an example of this in the image as LEDs have been embedded under an animal's skin.
While getting a glowing tattoo would be awesome, the arrays are actually intended for activating drugs, monitoring medical conditions, or performing other biomedical tasks within the body. Down the road, however, they could also be incorporated into consumer goods, robotics, or military/industrial applications.
Many groups have been trying to produce flexible electronic circuits, most of them incorporating new materials such as carbon nanotubes combined with silicon. The U Illinois arrays, by contrast, use the traditional semiconductor gallium arsenide (GaAs) and conventional metals for the diodes and detectors.
Last year, by stamping GaAs-based components onto a plastic film, Prof. John Rogers and his team were able to create the array’s underlying circuit. Recently, they added coiled interconnecting metal wires and electronic components, to create a mesh-like grid of LEDs and photodetectors. That array was added to a pre-stretched sheet of rubber, which was then itself encapsulated inside another piece of rubber, this one being bio-compatible and transparent.
The resulting device can be twisted or stretched in any direction, with the electronics remaining unaffected after being repeatedly stretched by up to 75 percent. The coiled wires, which spring back and forth like a telephone cord, are the secret to its flexibility.
Rogers and his associates are now working on commercializing their biocompatible flexible LED array via their startup company, mc10.
The research was recently published in the journal Nature Materials.

watching nanoparticles grow

I have spent a lot of time over the past decade-and-a-half talking about nanotech and nanoparticles. The often unexpected properties of these tiny specks of matter give them applications in everything from synthetic antibodies to fuel cells to water filters and far beyond.
Recently, for the first time ever, scientists were able to watch the particles grow from their earliest stage of development. Given that the performance of nanoparticles is based on their structure, composition, and size, being able to see how they grow could lead to the development of better growing conditions, and thus better nanotechnology.
The research was carried out by a team of scientists from the Center for Nanoscale Materials, the Advanced Photon Source (both run by the US government's Argonne National Laboratory) and the High Pressure Synergetic Consortium (HPSynC).
The team used highly focused high-energy X-ray diffraction to observe the nanoparticles. Amongst other things, it was noted that the initial chemical reaction often occurred quite quickly, then continued to evolve over time.
“It’s been very difficult to watch these tiny particles be born and grow in the past because traditional techniques require that the sample be in a vacuum and many nanoparticles are grown in a metal-conducting liquid,” said study coauthor Wenge Yang. “We have not been able to see how different conditions affect the particles, much less understand how we can tweak the conditions to get a desired effect.”
HPSynC’s Russell Hemley added, “This study shows the promise of new techniques for probing crystal growth in real time. Our ultimate goal is to use these new methods to track chemical reactions as they occur under a variety of conditions, including variable pressures and temperatures, and to use that knowledge to design and make new materials for energy applications.”
The research was recently published in the journal Nano Letters.

living gardens on bus rooftops

Working in NYC every day as I do, the value of parks and green spaces is obvious to me. But finding room for green spaces in ever more crowded cities isn't easy. NYU graduate student Marco Castro Cosio has hit upon the idea of planting gardens on some previously wasted space found on city streets – the roofs of buses. With New York's Metropolitan Transportation Authority (MTA) running a fleet of around 4,500 buses, each with a roof surface area of 340 square feet (31.5 m2), Cosio says that if a garden were grown on the roof of every one, there would be an extra 35 acres of rolling green space in the city.
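As a quick back-of-envelope check on that 35-acre figure, using only the numbers quoted above and the standard square-feet-per-acre conversion:

```python
# Back-of-envelope check of the "35 acres" figure quoted above.
buses = 4500              # MTA fleet size cited by Cosio
roof_area_sqft = 340      # per-bus roof area cited by Cosio
sqft_per_acre = 43560     # standard conversion factor

total_sqft = buses * roof_area_sqft
total_acres = total_sqft / sqft_per_acre
print(f"{total_sqft:,} sq ft is about {total_acres:.1f} acres")
# prints: 1,530,000 sq ft is about 35.1 acres
```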

It might sound a bit far-fetched, but Cosio's Bus Roots idea has managed to take second place in the DesignWala Grand Idea Competition and a prototype has already been installed on the roof of a vehicle dubbed the BioBus. The prototype garden only covers a small area at the rear of the BioBus's roof and is mostly growing small succulents, but it has been traveling around New York for the last five months and has even ventured as far as Ohio.

Cosio says the purpose of the Bus Roots project is to reclaim forgotten space, increase the quality of life and grow the amount of green spaces in the city. Amongst the benefits of bringing plant life to the city listed by Cosio are mitigation of the urban heat island effect, acoustical and thermal insulation and CO2 absorption – although you’d have to wonder whether the amount of CO2 soaked up by the bus’s rooftop garden is enough to offset the extra fuel the bus will burn through carting the extra soil and plant life around.

If you’re interested in checking out the prototype BioBus, it will be open to the public at the Orpheum Children’s Science Museum in Urbana-Champaign, Illinois on Sunday, October 10 and at the USA Science & Engineering Festival at the National Mall in Washington D.C. from October 23–24.

‘Artificial ovary’ allows human eggs to be matured outside the body

In a move that could yield infertility treatments for cancer patients and provide a powerful new means for conducting fertility research, researchers have built an artificial human ovary that can grow oocytes into mature human eggs in the laboratory. The ovary not only provides a living laboratory for investigating fundamental questions about how healthy ovaries work, but also can act as a testbed for seeing how problems, such as exposure to toxins or other chemicals, can disrupt egg maturation and health. It could also allow immature eggs, salvaged and frozen from women facing cancer treatment, to be matured outside the patient in the artificial ovary.
To create the ovary, the researchers at Brown University and Women & Infants Hospital formed honeycombs of theca cells, one of two key types of cells in the ovary, donated by reproductive-age (25-46) patients at the hospital. After the theca cells grew into the honeycomb shape, spherical clumps of donated granulosa cells were inserted into the holes of the honeycomb together with human egg cells, known as oocytes. In a couple of days the theca cells enveloped the granulosa cells and eggs, mimicking a real ovary. In experiments the structure was able to nurture eggs from the "early antral follicle" stage to mature human eggs.
Sandra Carson, professor of obstetrics and gynecology at the Warren Alpert Medical School of Brown University and director of the Division of Reproductive Endocrinology and Infertility at Women & Infants Hospital, said her goal was never to invent an artificial organ, per se, but merely to create a research environment in which she could study how theca and granulosa cells and oocytes interact. She then heard of the so-called "3D Petri dishes" developed by Jeffrey Morgan, which are made of a moldable agarose gel that provides a nurturing template to encourage cells to assemble into specific shapes. The two then teamed up to create the organ, resulting in the first fully functioning tissue to be made using Morgan's method.
The paper detailing the development of the artificial ovary appears in the Journal of Assisted Reproduction and Genetics.

OMMA SOCIAL

I will be speaking on a panel at the OMMA Social conference on Thursday, June 17th in NYC with some folks from Foursquare, Nielsen, SCVNGR and Microsoft about "How Mobile Social will Change Commerce".

The most magical marketing environment for anyone with something to sell would be one that marries the right person with the right place, the right product and the right time. But this is no longer a dream. Suddenly, we're at a point where all of those things can be brought together, with social as the glue that connects them. With more and more social activity taking place on mobile, and companies such as Facebook and Google now embracing QR codes, which create a shorthand in which profile data can be read by merchants at the point of sale, the era of in-store customized marketing is almost upon us. What will it look like? And is the early success of companies such as Foursquare an indication that portable social profiles are the wave of the future?

The panel will be moderated by Erik Sass from MediaPost.

Other panelists joining me will be:

Eric Friedman, Director of Client Services, Foursquare
Paul Kultgen, Director Mobile Media and Advertising, Nielsen
Chris Mahl, SVP, Chief Brand Alchemist, SCVNGR
Erin Wilson, Mobile Sales Specialist, Microsoft Advertising

http://bit.ly/OMMA_Social - #OMMASocial

Motion Gaming Technology For Everyone

Omek Interactive wants to put you in the game…and in the TV…and in the computer. The Israel-based company has developed Shadow SDK, a middleware package that enables 3D gesture technology for all types of home media. With Shadow, third-party developers can create realistic video games where your body becomes the controller, or it can be used to create gesture-controlled TV/media centers or computer interfaces. Omek Interactive demoed some great applications fueled by Shadow at Techonomy 2010. Check them out along with CEO Janine Kutliroff's presentation in the video below.

It looks like the human computer interface of the future could be the open air. I've seen some pretty cool gesture systems that only require a camera and a person's body to control various media devices. The incredible interface from Minority Report is going to arrive in the next few years, gesture TVs are coming to the market soon ("the end of 2010"), and Microsoft's Project Natal should be available at about the same time. Because Shadow-enabled applications can work with video games, it's often compared to Natal. Both can give you real-time control of an avatar, as you'll see in the following:

Kutliroff’s speech ends around 5:40 followed by a media room gesture control application, a demonstration of an avatar (7:43), and a pretty neat-looking boxing game (8:43).

Of course one of the big differences between Project Natal and Shadow is that you'll only ever see Natal on the Xbox or other Microsoft platforms. Shadow might be popping up everywhere. At least, that's what Kutliroff and Omek seem to be aiming for. Other companies in the gesture control business are focusing on a single application (Toshiba/Hitachi for TVs and home media, g-speak for computers, and Project Natal for video games). Omek Interactive isn't married to one particular kind of hardware and they're definitely trying to court a wide range of application-developing firms. While they've created some interesting demo games and applications, Kutliroff's presentation sticks to the middleware pitch. Shadow is, after all, an SDK. Omek is poised to enable third-party developers to build the next generation of gesture-controlled technologies. Probably in video games, but possibly for TVs and computers as well.
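To make that middleware role concrete, here is a minimal sketch of the division of labor it implies. None of these class or method names are Omek's actual API (they are hypothetical stand-ins); the point is only that a layer like Shadow turns raw depth-camera frames into high-level gesture events, and the game or media-center application reacts to those events without ever touching the tracking math.

```python
# Hypothetical sketch of the camera -> middleware -> application split
# described above. These names are illustrative, not Omek's real SDK.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    name: str          # e.g. "swipe_left", "punch"
    confidence: float  # 0.0 to 1.0

class FakeGestureEngine:
    """Stands in for middleware that converts depth frames into gestures."""
    def poll(self):
        # A real engine would run skeleton tracking on depth frames here.
        return [GestureEvent("swipe_left", 0.92)]

class MediaCenterUI:
    """The application only consumes gesture events."""
    def handle(self, event: GestureEvent) -> None:
        if event.name == "swipe_left" and event.confidence > 0.8:
            print("Next channel")

engine, ui = FakeGestureEngine(), MediaCenterUI()
for event in engine.poll():
    ui.handle(event)
```

The same event stream could just as easily drive a game avatar or a computer interface, which is exactly the portability Omek is selling.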

The only question I have is whether the products that would sandwich Shadow (the 3D cameras on one side, and the gesture enabled applications on the other) are actually ready. We’ve seen some depth-perceptive cameras on the market (such as the 3D stereoscopic webcam from Minoru) but they are far from ubiquitous. Likewise, there’s been some good buzz surrounding gesture TVs and Project Natal’s video games but neither is actually on sale yet. This is an emerging market, and while the possibilities for gesture controls are very promising there’s no guarantee they’ll be popular. Omek could be caught as the middleman between two types of products that never get off the ground.

I must admit that part of my skepticism stems from the fact that gesture controls are not my favorite of the technologies contending to be the next major human-computer interface. As fun as it may be to play a movie with the flip of a wrist, or use your entire body to play a virtual boxing match, these applications lack tactile feedback. There’s nothing to hold. Nothing physical to let you know that you’re actually interacting with something. To me, for gesture controls to really succeed they’ll need some sort of haptics. I’d be totally cool with flailing my limbs through the open air if I could actually feel when my virtual self was hitting something.

Still, my personal preferences aside, the entire body monitoring control scheme seems to be grabbing a lot of attention. Omek Interactive is making a great move by racing to become the definitive middleware solution in the field. If the public does become interested in gesture technology, the Shadow SDK could get some major use. It would let companies that are good at making hardware, and companies that are good at making applications (i.e. games) focus on their strengths while Omek knits them together. That’s a smart strategy and a sure way to enable innovation. It will likely take several years before we know whether gesture controls are here to stay, but Omek is certainly a name to watch while we figure it all out.

bipedal humanoid robots will inhabit the moon by 2015

Here I go with another moon-themed post. Seemingly, my son's fascination with our closest neighbor is starting to rub off. My son and I talk a lot about space exploration. And it's been more than 40 years since the first human set foot on the moon. So where are all the robot space explorers? While rovers like those that have been trawling the Martian surface in recent times could properly be called robots, and machines like the legless R2 (seen in the video below) are heading to space, these don't match the classic science fiction image of a bipedal humanoid bot that we've all become accustomed to. Now a Japanese space-business group is promising to set things in order by sending a humanoid robot to the moon by 2015.

Japan's Space Oriented Higashiosaka Leading Association (SOHLA) expects to spend an estimated 1 billion yen (US$10.5 million) getting the robot onto the lunar surface. Named Maido-kun after the satellite launched aboard a Japan Aerospace Exploration Agency (JAXA) H-IIA rocket in 2009, the robot appears to have no clearly defined mission (apart from getting there).

It's hoped that Maido-kun will travel to the moon on a JAXA mission planned for around 2015.

Why not stick to wheels? “Humanoid robots are glamorous, and they tend to get people fired up,” said SOHLA board member Noriyuki Yoshida. “We hope to develop a charming robot to fulfill the dream of going to space.”

Achieving the feat would certainly be another feather in the cap of Japan's world-leading robotics industry.

Robots In The Cloud

With the phrase "web 2.0" falling out of vogue, the most exciting new uses of the internet are now all about the cloud, a term for servers invisibly doing smart, fast things for net users who may be on the other side of the world.

But it's not just humans that stand to gain, as a recent corporate acquisition by cloud pioneer Google demonstrates. Google has snapped up British start-up Plink, which has devised a cellphone app that can identify virtually any work of art from a photograph. Plink's app will bolster Google's Goggles service, which uses a cellphone camera to recognise objects or even translate text. Unlike most cloud start-ups, Plink sprang from a robotics lab, not a Californian garage. Its story demonstrates how the cloud has as much to offer confused robots as it does humans looking for smarter web apps.

Spatial memory

Mark Cummins and James Philbin of Plink developed the tech while working in Paul Newman's mobile robotics research group and Andrew Zisserman's visual geometry group, both at the University of Oxford. The group is trying to enable robots to explore the cluttered human world alone. Although GPS is enough to understand a city's street layout, free-roaming robots will need to negotiate the little-mapped ins and outs of buildings, street furniture and more.

Image-recognition software developed at Oxford has helped their wheeled robots build their own visual maps of the city using cameras, developing a human-like ability to recognise when they have seen something before, even if it's viewed from a different angle or if other nearby objects have moved.

You are here

Plink gives cellphone users access to those algorithms. Photos they take of an artwork are matched against images on a database stored in the cloud, even if they were snapped from a different angle. Although the Oxford team's algorithms originally ran entirely on the robot, Newman is now working on moving the visual maps made by a robot into the cloud, to create a Plink-like service to help other robots navigate, he says. Like a user of Plink, a lost robot would take a photo of its location and send it via the internet to an image-matching server; after matching the photo with its map-linked image bank, the server would tell the robot of any matches that reveal where it is.
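The robot's side of that loop is easy to picture. The sketch below is only an illustration of the request/response pattern just described; the endpoint URL and the response fields are assumptions for the example, not Plink's or Oxford's actual service.

```python
# Illustrative client for a cloud place-recognition service: photograph
# the scene, send it to an image-matching server, and read back any
# matches that reveal where the robot is. The URL and JSON fields are
# assumed for this sketch only.
import requests

def locate(image_path: str):
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://example.com/place-recognition/match",  # hypothetical endpoint
            files={"image": f},
            timeout=10,
        )
    resp.raise_for_status()
    matches = resp.json().get("matches", [])  # assumed response shape
    # Each match carries the map-linked position of the recognised view.
    return [(m["lat"], m["lon"], m["score"]) for m in matches]

# candidates = locate("current_view.jpg")
```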

Newman is also testing that concept using cameras installed in cars. "We can drive around Oxford at up to 50 miles per hour doing place recognition on the road," he says.

If image maps from many cities were made into a cloud-like service, any camera-equipped car could look at buildings and other roadside features to tell where it was, and the results would be more accurate than is possible with GPS.

Adept users

Adept Technologies of Pleasanton, California, the largest US-based manufacturer of industrial robots, is also looking cloud-ward. Some of the firm's robots move and package products in warehouses. With access to a Plink-like image-recognition system they could handle objects never encountered before without reprogramming.

"This connection of automation to vast amounts of information will also be important for robots tasked with assisting people beyond the factory walls," says Rush LaSelle, the company's director of global sales. A "carebot" working in a less controlled environment such as a hospital or a disabled person's home, for instance, would have to be able to cope with novel objects and situations.

Cellphones, humans and robots all have a lot to gain from a smarter, faster cloud.

Pen + touch Interface

Touch screen interfaces are the gadget design trend du jour, but that doesn't mean they do everything elegantly. The finger is simply too blunt for many tasks. Microsoft Research's "Manual Deskterity" attempts to combine the strengths of touch interaction with the precision of a pen.

"Everything, including touch, is best for something and worse for something else," says Ken Hinckley, a research scientist at Microsoft who is involved with the project, which will be presented this week at the ACM Conference on Human Factors in Computing Systems (CHI). The prototype in the video above for Manual Deskterity is a drafting application built for the Microsoft Surface, a tabletop touchscreen. Users can perform typical touch actions, such as zooming in and out and manipulating images, but they can also use a pen to draw or annotate those images.
The interface's most interesting features come out when the two types of interaction are combined. For example, a user can copy an object by holding it with one hand and then dragging the pen across the image, "peeling" off a new image that can be placed elsewhere on the screen. By combining pen and hand, users get access to features such as an exacto knife, a rubber stamp, and brush painting.
 
What Was The Inspiration?
Hinckley says the researchers videotaped users working on visual projects with sketchbooks, scissors, glue, and other typical physical art supplies. They noticed that people tended to hold an image with one hand while making notes about it or doing other work related to it with the other. The researchers decided to incorporate this in their interface: touching an object onscreen with a free hand indicates that the actions performed with the pen relate to that object.
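That "hold with one hand, act with the pen" rule maps naturally onto a small piece of event-handling logic. The sketch below is a hypothetical illustration of the idea, not code from Microsoft's actual Surface prototype: whatever object a touch is currently holding sets the context in which the next pen stroke is interpreted.

```python
# Hypothetical sketch of the pen + touch rule described above: the object
# held by a finger sets the context, and the pen action is interpreted
# relative to it. Not actual Microsoft Surface code.
class Canvas:
    def __init__(self):
        self.held_object = None  # object currently pinned by a finger

    def on_touch_down(self, obj):
        self.held_object = obj   # holding an object selects the context

    def on_touch_up(self, obj):
        if self.held_object is obj:
            self.held_object = None

    def on_pen_stroke(self, stroke):
        if self.held_object is None:
            return {"action": "draw", "stroke": stroke}  # bare pen just draws
        # Pen + held object: a contextual command, e.g. dragging the pen
        # off a held image "peels" off a copy, as in the demo above.
        return {"action": "copy", "source": self.held_object, "path": stroke}

canvas = Canvas()
canvas.on_touch_down("photo_1")
print(canvas.on_pen_stroke([(10, 10), (40, 40)]))  # contextual copy
```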
Hinckley acknowledges that the interface includes a lot of tricks that users need to learn. But he thinks this is true of most interfaces. "This idea that people just walk up with an expectation of how a [natural user interface] should work is a myth," he says.
Hinckley believes that natural user interfaces can ease the learning process by engaging muscle memory, rather than forcing users to memorize sequences of commands or the layout of menus. If the work is successful, Hinckley says it will show how different sorts of input can be used in combination.
Hinckley also thinks it's a mistake to focus on devices that work with touch input alone. He says, "The question is not, 'How do I design for touch?' or 'How do I design for pen?' We should be asking, 'What is the correct division of labor in the interface for pen and touch interactions such that they complement one another?'"
 
What's Next?
The researchers plan to follow up by adapting their interface to work on mobile devices.