
Growing laboratory-engineered miniature human livers

I enjoyed eating cow liver as a kid and never understood why so many kids thought it was bad; it was, and still is, one of my favorite foods. Now, one day soon, I might just be able to grow my own livers for snacking anytime I'd like, with the bonus that if my own liver wears out or fails, a surgeon might be able to pop a new one in. Well, it might not be quite that simple. But in the quest to grow replacement human organs in the lab, livers are no doubt at the top of many a wish list. The liver performs a wide range of functions that support almost every organ in the body, and there is no way to compensate for their absence, so the ability to grow a replacement is the focus of many research efforts. Now, for the first time, researchers have successfully engineered miniature livers in the lab using human liver cells.

The ultimate aim of the research, carried out at the Institute for Regenerative Medicine at Wake Forest University Baptist Medical Center, is to provide a solution to the shortage of donor livers available for patients who need transplants. The laboratory-engineered livers could also be used to test the safety of new drugs.

The livers engineered by the researchers are about an inch in diameter and weigh about 0.2 ounces (5.7 g). Although the average adult human liver weighs around 4.4 pounds (2 kg), the scientists say an engineered liver would need to weigh only about one pound (454 g) to meet the minimum needs of the human body, because research has shown that human livers functioning at 30 percent of capacity are able to sustain the body.
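
As a rough sanity check, that arithmetic is simple enough to sketch in a few lines of Python. The figures below are just the ones quoted above, and the comparison with the one-pound target is only approximate:

```python
# Back-of-the-envelope check on the liver-mass figures quoted above.
AVERAGE_ADULT_LIVER_KG = 2.0    # ~4.4 lb average adult human liver
MIN_FUNCTIONAL_FRACTION = 0.30  # ~30% of capacity can sustain the body

min_mass_kg = AVERAGE_ADULT_LIVER_KG * MIN_FUNCTIONAL_FRACTION
print(f"~{min_mass_kg * 1000:.0f} g of functioning liver tissue needed")
# -> ~600 g, the same ballpark as the ~one pound (454 g) the scientists cite.
```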

“We are excited about the possibilities this research represents, but must stress that we’re at an early stage and many technical hurdles must be overcome before it could benefit patients,” said Shay Soker, Ph.D., professor of regenerative medicine and project director. “Not only must we learn how to grow billions of liver cells at one time in order to engineer livers large enough for patients, but we must determine whether these organs are safe to use in patients.”

How the livers were engineered

To engineer the organs, the scientists took animal livers and treated them with a mild detergent to remove all cells, a process called decellularization. This left only the collagen "skeleton," or support structure, which allowed the scientists to replace the original cells with two types of human cells: immature liver cells known as progenitors, and endothelial cells that line blood vessels.

Because the network of vessels remains intact after the decellularization process, the researchers were able to introduce the cells into the liver skeleton through a large vessel that feeds a system of smaller vessels in the liver. The liver was then placed in a bioreactor, a piece of equipment that provides a constant flow of nutrients and oxygen throughout the organ.

Flexible biocompatible LEDs for next-gen biomedicine

Researchers from the University of Illinois at Urbana-Champaign have created biocompatible LED arrays that can bend, stretch, and even be implanted under the skin. You can see an example of this in the image, where LEDs have been embedded under an animal's skin.
While getting a glowing tattoo would be awesome, the arrays are actually intended for activating drugs, monitoring medical conditions, or performing other biomedical tasks within the body. Down the road, however, they could also be incorporated into consumer goods, robotics, or military/industrial applications.
Many groups have been trying to produce flexible electronic circuits, most of them incorporating new materials such as carbon nanotubes combined with silicon. The University of Illinois arrays, by contrast, use the traditional semiconductor gallium arsenide (GaAs) and conventional metals for the diodes and detectors.
Last year, by stamping GaAs-based components onto a plastic film, Prof. John Rogers and his team were able to create the array's underlying circuit. Recently, they added coiled interconnecting metal wires and electronic components to create a mesh-like grid of LEDs and photodetectors. That array was applied to a pre-stretched sheet of rubber, which was then encapsulated inside another piece of rubber, this one biocompatible and transparent.
The resulting device can be twisted or stretched in any direction, with the electronics remaining unaffected after being repeatedly stretched by up to 75 percent. The coiled wires, which spring back and forth like a telephone cord, are the secret to its flexibility.
Rogers and his associates are now working on commercializing their biocompatible flexible LED array via their startup company, mc10.
The research was recently published in the journal Nature Materials.

Watching nanoparticles grow

I have spent a lot of time over the past decade and a half talking about nanotech and nanoparticles. The often unexpected properties of these tiny specks of matter give them applications in everything from synthetic antibodies to fuel cells to water filters and far beyond.
Recently, for the first time ever, scientists were able to watch the particles grow from their earliest stage of development. Given that the performance of nanoparticles is based on their structure, composition, and size, being able to see how they grow could lead to the development of better growing conditions, and thus better nanotechnology.
The research was carried out by a team of scientists from the Center for Nanoscale Materials and the Advanced Photon Source (both run by the US Government's Argonne National Laboratory) and the High Pressure Synergetic Consortium (HPSynC).
The team used highly focused high-energy X-ray diffraction to observe the nanoparticles. Amongst other things, it was noted that the initial chemical reaction often occurred quite quickly, then continued to evolve over time.
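
To give a sense of how diffraction data can track particle growth, here is a minimal sketch using the textbook Scherrer relation, which estimates crystallite size from the width of a diffraction peak. This is my own illustration, not the team's method, and all values are made up:

```python
import math

# Scherrer relation: D ~ K * wavelength / (beta * cos(theta)), a standard
# estimate of crystallite size from X-ray diffraction peak broadening.
# Illustrative values only (not from the study, which used higher-energy X-rays).
K = 0.9                          # shape factor, ~0.9 for roughly spherical grains
wavelength_nm = 0.154            # Cu K-alpha X-ray wavelength, in nanometers
beta_rad = math.radians(0.5)     # peak full width at half maximum (FWHM)
theta_rad = math.radians(19.0)   # Bragg angle (half the 2-theta peak position)

size_nm = K * wavelength_nm / (beta_rad * math.cos(theta_rad))
print(f"Estimated crystallite size: {size_nm:.0f} nm")
# As particles grow, the diffraction peak sharpens (beta shrinks) and the
# estimate rises; this is one way growth shows up in time-resolved data.
```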
“It’s been very difficult to watch these tiny particles be born and grow in the past because traditional techniques require that the sample be in a vacuum and many nanoparticles are grown in a metal-conducting liquid,” said study coauthor Wenge Yang. “We have not been able to see how different conditions affect the particles, much less understand how we can tweak the conditions to get a desired effect.”
HPSynC’s Russell Hemley added, “This study shows the promise of new techniques for probing crystal growth in real time. Our ultimate goal is to use these new methods to track chemical reactions as they occur under a variety of conditions, including variable pressures and temperatures, and to use that knowledge to design and make new materials for energy applications.”
The research was recently published in the journal Nano Letters.

‘Artificial ovary’ allows human eggs to be matured outside the body

In a move that could yield infertility treatments for cancer patients and provide a powerful new means for conducting fertility research, researchers have built an artificial human ovary that can grow oocytes into mature human eggs in the laboratory. The ovary not only provides a living laboratory for investigating fundamental questions about how healthy ovaries work, but also can act as a testbed for seeing how problems, such as exposure to toxins or other chemicals, can disrupt egg maturation and health. It could also allow immature eggs, salvaged and frozen from women facing cancer treatment, to be matured outside the patient in the artificial ovary.
To create the ovary, the researchers at Brown University and Women & Infants Hospital formed honeycombs of theca cells, one of two key types of cells in the ovary, donated by reproductive-age (25-46) patients at the hospital. After the theca cells grew into the honeycomb shape, spherical clumps of donated granulosa cells were inserted into the holes of the honeycomb together with human egg cells, known as oocytes. In a couple of days the theca cells enveloped the granulosa cells and eggs, mimicking a real ovary. In experiments, the structure was able to nurture eggs from the "early antral follicle" stage to mature human eggs.
Sandra Carson, professor of obstetrics and gynecology at the Warren Alpert Medical School of Brown University and director of the Division of Reproductive Endocrinology and Infertility at Women & Infants Hospital, said her goal was never to invent an artificial organ per se, but merely to create a research environment in which she could study how theca and granulosa cells and oocytes interact. She then heard of the so-called "3D Petri dishes" developed by Jeffrey Morgan, which are made of a moldable agarose gel that provides a nurturing template to encourage cells to assemble into specific shapes. The two then teamed up to create the organ, resulting in the first fully functioning tissue made using Morgan's method.
The paper detailing the development of the artificial ovary appears in the Journal of Assisted Reproduction and Genetics.

Moving Towards An Open Singularity

Recently, I had a dialogue with some colleagues (Tina and RJ) about technology and the future. The focus of our discussion was the Metaverse and the Singularity, although my colleagues were unfamiliar with those exact terms. I believe the dialogue is important enough that I want to share some thoughts about that discussion and the Singularity prior to the Singularity Summit (which is happening in NYC on October 3-4). I encourage anyone reading this to attend.

Yes, this post is long, but worthwhile, if for no other reason than to share the ideas of the Singularity and the Metaverse, as well as some new thoughts I had on those subjects.

So, the conversation with my colleagues went like this (paraphrasing):

- "What happens when virtual worlds meet geospatial maps of the planet?"

- "When simulations get real and life and business go virtual?"

- "When you use a virtual Earth to navigate the physical Earth, and your avatar becomes your online agent?"

"What happens then," I said, "is called the Metaverse."

I recall an observation made by polio vaccine pioneer Dr. Jonas Salk. He said that the most important question we can ask of ourselves is, "Are we being good ancestors?"

This is a particularly relevant question for those of us who will be attending the Singularity Summit this year. In our work, in our policies, in our choices, in the alternatives that we open and those that we close, are we being good ancestors? Our actions, our lives have consequences, and we must realize that it is incumbent upon us to ask whether the consequences we're bringing about are desirable.

This question was a big part of the conversation with my colleagues. It is not an easy question to answer, in part because it can be an uncomfortable examination. But it becomes especially challenging when we recognize that even small choices matter. It's not just the multi-billion-dollar projects and unmistakably world-altering ideas that will change the lives of our descendants. Sometimes, perhaps most of the time, profound consequences can arise from the most prosaic of topics.

Which is why I'm going to write a bit here about video games.

Well, not just video games, but video games and camera phones (which, as many of my readers know, I happen to know quite a bit about), and Google Earth, and the myriad day-to-day technologies that, individually, may attract momentary notice, but in combination may actually offer us a new way of grappling with the world. And they just might, along the way, help to shape the potential for a safe Singularity.

In the Metaverse Roadmap Overview, the authors sketch out four scenarios of how a combination of forces driving the development of immersive, richly connected information technologies may play out over the next decade. But what has struck me more recently about the roadmap scenarios is that the four worlds could also represent four pathways to a Singularity. Not just in terms of the technologies, but—more importantly—in terms of the social and cultural choices we make while building those technologies.

The four metaverse worlds emerged from a relatively commonplace scenario structure. The authors arrayed two spectra of possibility against each other, thereby offering four outcomes. Analysts sometimes refer to this as the "four-box" method, and it's a simple way of forcing yourself to think through different possibilities.

This is probably the right spot to insert this notion: scenarios are not predictions, they're provocations. They're ways of describing different future possibilities not to demonstrate what will happen, but to suggest what could happen. They offer a way to test out strategies and assumptions—what would the world look like if we undertook a given action in these four futures?

To construct the scenario set the authors selected two themes likely to shape the ways in which the Metaverse unfolds: the spectrum of technologies and applications ranging from augmentation tools that add new capabilities to simulation systems that model new worlds; and the spectrum ranging from intimate technologies, those that focus on identity and the individual, to external technologies, those that provide information about and control over the world around you. These two spectra collide and contrast to produce four scenarios.
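
The four-box construction is easy to make concrete. Here is a minimal sketch (my own illustration, not code from the Roadmap) that crosses the two spectra to enumerate the four scenarios described below:

```python
from itertools import product

# The Metaverse Roadmap's two spectra, crossed "four-box" style.
technology_axis = ("Augmentation", "Simulation")  # what the technology does
focus_axis = ("Intimate", "External")             # where the technology points

# Scenario names as assigned in the Roadmap, keyed by axis combination.
scenarios = {
    ("Simulation", "Intimate"): "Virtual Worlds",
    ("Simulation", "External"): "Mirror Worlds",
    ("Augmentation", "External"): "Augmented Reality",
    ("Augmentation", "Intimate"): "Lifelogging",
}

for tech, focus in product(technology_axis, focus_axis):
    print(f"{tech} x {focus} -> {scenarios[(tech, focus)]}")
```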

The first, Virtual Worlds, emerges from the combination of Simulation and Intimate technologies. These are immersive representations of an environment, one where the user has a presence within that reality, typically as an avatar of some sort. Today, this means World of Warcraft, Second Life, PlayStation Home and the like.

Over the course of the Virtual Worlds scenario, we'd see the continued growth and increased sophistication of immersive networked environments, allowing more and more people to spend substantial amounts of time engaged in meaningful ways online. The ultimate manifestation of this scenario would be a world in which the vast majority of people spend essentially all of their work and play time in virtual settings, whether because the digital worlds are supremely compelling and seductive, or because the real world has suffered widespread environmental and economic collapse.

The next scenario, Mirror Worlds, comes from the intersection of Simulation and Externally-focused technologies. These are information-enhanced virtual models or “reflections” of the physical world, usually embracing maps and geo-locative sensors. Google Earth is probably the canonical present-day version of an early Mirror World.

While undoubtedly appealing to many individuals, in my view the real power of the Mirror World setting falls to institutions and organizations seeking a more complete, accurate and nuanced understanding of the world's transactions and underlying systems. The capabilities of Mirror World systems are enhanced by a proliferation of sensors and remote data gathering, giving these distributed information platforms a global context. Geospatial, environmental and economic patterns could be easily represented and analyzed. Undoubtedly, political debates would arise over just who does, and does not, get access to these models and databases.

Thirdly, Augmented Reality looks at the collision of Augmentation and External technologies. Such tools would enhance the external physical world for the individual, through the use of location-aware systems and interfaces that process and layer networked information on top of our everyday perceptions.

Augmented Reality makes use of the same kinds of distributed information and sensory systems as Mirror Worlds, but does so in a much more granular, personal way. The AR world is much more interested in depth than in flows: the history of a given product on a store shelf; the name of the person waving at you down the street (along with her social network connections and reputation score); the comments and recommendations left by friends at a particular coffee shop, or bar, or bookstore. This world is almost vibrating with information, and is likely to spawn as many efforts to produce viable filtering tools as there are projects to assign and recognize new data sources.

Lastly, we have Lifelogging, which brings together Augmentation and Intimate technologies. Here, the systems record and report the states and life histories of objects and users, enhancing observation, recall, and communication. I've sometimes discussed one version of this as the "participatory panopticon."

Here, the observation tools of an Augmented Reality world get turned inward, serving as an adjunct memory. Lifelogging systems are less apt to be attuned to the digital comments left at a bar than to the spoken words of the person at the table next to you. These tools would be used to capture both the practical and the ephemeral, like where you left your car in the lot and what it was that made your spouse laugh so much. Such systems have obvious political implications, such as catching a candidate's gaffe or a bureaucrat's corruption. But they also have significant personal implications: what does the world look like when we know that everything we say or do is likely to be recorded?

This underscores a deep concern that crosses the boundaries of all four scenarios: trust.

"Trust" encompasses a variety of key issues: protecting privacy and being safely visible; information and transaction security; and, critically, honesty and transparency. It wouldn't take much effort to turn all four of these scenarios into dystopias. The common element of the malevolent versions of these societies would be easy to spot: widely divergent levels of control over and access to information, especially personal information. The ultimate importance of these scenarios isn't just the technologies they describe, but the societies that they create.

So what do these tell us about a Singularity?

Across the four Metaverse scenarios, we can see a variety of ways in which the addition of an intelligent system would enhance the user's experience. Dumb non-player characters and repetitive bots in virtual worlds, for example, might be replaced by virtual people essentially indistinguishable from characters controlled by human users. Efforts to make sense of the massive flows of information in a Mirror World setting would be enormously enhanced by the assistance of a sophisticated machine analyst. Augmented Reality environments would thrive with truly intelligent agent systems that know what to filter and what to emphasize. In a Lifelogging world, an intelligent companion in one's mobile or wearable system would be needed to figure out how to index and catalog memories in a personally meaningful way; it's likely that such a system would need to learn how to emulate your own thought processes, becoming a virtual shadow.

None of these systems would truly need to be self-aware, self-modifying intelligent machines—but in time, each could lead to that point.

But if the potential benefits of these scenario worlds would be enhanced by intelligent information technology, so too would the dangers. Unfortunately, avoiding dystopian outcomes is a challenge that may be trickier than some expect, and it is one with direct implications for all of our hopes and efforts to bring about a future that would benefit human civilization, not end it.

It starts with a basic premise: software is a human construction. That's obvious when considering code written by hand over empty pizza boxes and stacks of paper coffee cups. But even the closest process we have to entirely computer-crafted software—emergent, evolutionary code—still betrays the presence of a human maker: evolutionary algorithms may have produced the final software, and may even have done so in ways that remain opaque to human observers, but the goals of the evolutionary process, and the selection mechanism that drives the digital evolution towards these goals, are quite clearly of human origin.

To put it bluntly, software, like all technologies, is inherently political. Even the most disruptive technologies, the innovations and ideas that can utterly transform society, carry with them the legacies of past decisions, the culture and history of the societies that spawned them. Code inevitably reflects the choices, biases and desires of its creators.

This will often be unambiguous and visible, as with digital rights management. It can also be subtle, as with operating system routines written to benefit one application over its competitors (I know some of you reading this are old enough to remember "DOS isn't done 'til Lotus won't run"). Sometimes, code may be written to reflect an even more dubious bias, as with the allegations of voting machines intentionally designed to make election-hacking easy for those in the know. Much of the time, however, the inclusion of software elements reflecting the choices, biases and desires of its creators will be utterly unconscious, the result of what the coders deem obviously right.

We can imagine parallel examples of the ways in which metaverse technologies could be shaped by deeply-embedded cultural and political forces: the obvious, such as lifelogging systems that know to not record digitally-watermarked background music and television; the subtle, such as augmented reality filters that give added visibility to sponsors, and make competitors harder to see; the malicious, such as mirror world networks that accelerate the rupture between the information haves and have-nots—or, perhaps more correctly, between the users and the used; and, again and again, the unintended-but-consequential, such as virtual world environments that make it impossible to build an avatar that reflects your real or desired appearance, offering only virtual bodies sprung from the fevered imagination of perpetual adolescents.

So too with what we today talk about as a "singularity." The degree to which human software engineers actually get their hands dirty with the nuts & bolts of AI code is secondary to the basic condition that humans will guide the technology's development, making the choices as to which characteristics should be encouraged, which should be suppressed or ignored, and which ones signify that "progress" has been made. Whatever the degree to which post-singularity intelligences would be able to reshape their own minds, we have to remember that the first generation will be our creations, built with interests and abilities based upon our choices, biases and desires.

This isn't intrinsically bad; the fact that emerging digital minds will reflect the interests of their human creators is a lever that gives us a real chance to make sure a "singularity" ultimately benefits us. But it holds a real risk. Not that people won't know there's a bias: we've lived long enough with software bugs and so-called "computer errors" to know not to put complete trust in the pronouncements of what may seem to be digital oracles. The risk comes from not being able to see what that bias might be.

Many of us rightly worry about what might happen with "Metaverse" systems that analyze our life logs, that monitor our every step and word, that track our behavior online so as to offer us the safest possible society—or best possible spam. Imagine the risks associated with trusting that when the creators of emerging self-aware systems say that they have our best interests in mind, they mean the same thing by that phrase that we do.

For me, the solution is clear. Trust depends upon transparency. Transparency, in turn, requires openness.

We need an Open Singularity.

At minimum, this means expanding the conversation about the shape that a singularity might take beyond a self-selected group of technologists and philosophers. An "open access" singularity, if you will. Ray Kurzweil's books and lectures are a solid first step, but the public discourse around the singularity concept needs to reflect a wider diversity of opinion and perspective.

If the singularity is as likely and as globally, utterly transformative as many here believe, it would be profoundly unethical to make it happen without including all of the stakeholders in the process—and we are all stakeholders in the future.

World-altering decisions made without taking our vast array of interests into account are intrinsically flawed, likely fatally so. They would become catalysts for conflicts, potentially even the triggers for some of the "existential threats" that may arise from transformative technologies. Moreover, working to bring in diverse interests has to happen as early in the process as possible. Balancing and managing a global diversity of needs won't be easy, but it will be impossible if democratization is thought of as a bolt-on addition at the end.

Democracy is a messy process. It requires give-and-take, and an acknowledgement that efficiency is less important than participation.

We may not have an answer now as to how to do this, how to democratize the singularity. If that is the case, and I suspect it is, then we have added work ahead of us. The people who have embraced the possibility of a singularity should be working at least as hard on making possible a global inclusion of interests as they are on making the singularity itself happen. All of the talk of "friendly AI" and "positive singularities" will be meaningless if the only people who get to decide what that means are the few hundred who read and understand this blog posting.

My preferred pathway would be to "open source" the singularity, to bring in the eyes and minds of millions of collaborators to examine and co-create the relevant software and models, seeking out flaws and making the code more broadly reflective of a variety of interests. Such a proposal is not without risks. Accidents will happen, and there will always be those few who wish to do others harm. But the same is true in a world of proprietary interests and abundant secrecy, and those are precisely the conditions that can make effective responses to looming disasters difficult. With an open approach, you have millions of people who know how dangerous technologies work, know the risks that they hold, and are committed to helping to detect, defend against, and respond to crises. That these are, in Bill Joy's term, "knowledge-enabled" dangers means that knowledge also enables our defense; knowledge, in turn, grows faster as it becomes more widespread. This is not simply speculation; we've seen time and again, from digital security to the global response to influenza, that open access to information about risks ultimately makes them more manageable.

The Metaverse Roadmap offers a glimpse of what the next decade might hold, but does so recognizing that the futures it describes are not end-points, but transitions. The choices we make today about commonplace tools and everyday technologies will shape what's possible, and what's imaginable, with the generations of technologies to come. If the singularity is in fact near, the fundamental tools of information, collaboration and access will be our best hope for making it happen in a way that spreads its benefits and minimizes its dangers—in short, making it happen in a way that lets us be good ancestors.

If we're willing to try, we can create a future, a singularity, that's wise, democratic and sustainable—a future that's open. Open as in transparent. Open as in participatory. Open as in available to all. Open as in filled with an abundance of options.

The shape of tomorrow remains in our grasp, and will be determined by the choices we make today. Choose wisely.

A real-time view of the human chemical messenger system

As a kid I wanted to grow up and become a biomedical engineer. I think I was probably mostly inspired by Lee Majors' portrayal of The Six Million Dollar Man. I thought it would be amazingly cool to build robotic body parts that could be attached to people, in essence upgrading them into uber-beings with super-strength and other amazing abilities. Ultimately, it was my inability to tolerate inorganic chemistry classes that dashed those dreams and sent me instead deep into the worlds of business and computer science.

Today, I wonder just how far our understanding of life itself will extend by the time my son is ready to go to university a couple of decades from now. Earlier this week, U.K. researchers announced the development of a technology that enables the real-time viewing of microscopic activity within the body's chemical messenger system. The researchers first created novel drug molecules which have "fluorescent labels" attached; then, using fluorescence correlation spectroscopy, the molecules can be followed under a highly sensitive microscope as they bind to receptors, glowing all the while under a laser beam, all in real time at the single-molecule level. Truly remarkable!

The laser technology has helped to attract £1.3 million from the MRC (Medical Research Council) for a five-year project that will offer new insight into the tiny world of activity taking place within single cells, and could contribute to the design of new drugs to treat human diseases such as asthma and arthritis with fewer side effects.

The team, involving scientists from the University of Nottingham’s Schools of Biomedical Science (Professor Steve Hill and Dr Steve Briddon) and Pharmacy (Dr Barrie Kellam), is concentrating on a type of specialised docking site (receptor) on the surface of a cell that recognises and responds to a natural chemical within the body called adenosine.

These A3-adenosine receptors work within the body by binding with proteins to cause a response within cells, and are found in very tiny, highly specialised areas of the cell membrane called microdomains. Microdomains contain a collection of different molecules that are involved in telling the cell how to respond to drugs or hormones.

It is believed that these receptors play an important role in inflammation within the body and knowing more about how they operate could inform the future development of anti-inflammatory drugs that target just those receptors in the relevant microdomain of the cell, without influencing the same receptors in other areas of the cell. However, scientists have never before been able to look in detail at their activity within these tiny microscopic regions of a living cell.

The Nottingham researchers have solved this problem by creating novel drug molecules which have fluorescent labels attached. Using a cutting-edge laser technology called fluorescence correlation spectroscopy, the fluorescent drug molecules can be detected as they glow under the laser beam of a highly sensitive microscope. This allows their binding to the receptor to be followed, for the first time, in real time at the single-molecule level.
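
For the curious, the heart of fluorescence correlation spectroscopy is a fairly simple computation: autocorrelating the fluctuations in fluorescence intensity as labeled molecules drift through (or bind within) a tiny detection volume. Here is a minimal sketch of that core step using synthetic data; this is my own illustration, not the Nottingham group's code:

```python
import numpy as np

# Core of fluorescence correlation spectroscopy (FCS): the normalized
# autocorrelation G(tau) of fluorescence-intensity fluctuations.
# A slowly decaying G(tau) means molecules linger in the detection
# volume (e.g., because they are bound). Synthetic data; illustration only.

rng = np.random.default_rng(0)

def autocorrelation(intensity, max_lag):
    """G(tau) = <dF(t) * dF(t+tau)> / <F>^2 for lags 1..max_lag."""
    mean = intensity.mean()
    delta = intensity - mean
    n = len(intensity)
    return np.array([
        np.mean(delta[: n - lag] * delta[lag:]) / mean**2
        for lag in range(1, max_lag + 1)
    ])

# Fake intensity trace: slow fluctuations (diffusing fluorophores) plus noise.
slow = np.convolve(rng.normal(size=20_000), np.ones(50) / 50, mode="same")
trace = 100 + 10 * slow + rng.normal(scale=2, size=20_000)

g = autocorrelation(trace, max_lag=200)
print(f"G(1) = {g[0]:.5f}, G(200) = {g[-1]:.5f}")  # G decays with lag
```

In a real experiment, the rate at which G(tau) decays encodes how long molecules dwell in the laser focus, which is what distinguishes a freely diffusing drug molecule from one bound to a receptor.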

Leading the project, Professor Steve Hill of the School of Biomedical Sciences said: “These microdomains are so tiny you could fit five million of them on a full stop. There are 10,000 receptors on each cell, and we are able to follow how single drug molecules bind to individual receptors in these specialised microdomains.

“What makes this single-molecule laser technique unique is that we are looking at them in real time on a living cell. Other techniques that investigate how drugs bind to their receptors require many millions of cells to get a big enough signal, and this normally involves destroying the cells in the process.”

The researchers will be using donated blood as a source of A3-receptors in specialised human blood cells (neutrophils) that have important roles during inflammation.

Different types of adenosine receptors are found all over the body; they can exist in different areas of the cell membrane and have different properties. Scientists hope that eventually the new technology could also be used to unlock the secrets of the role these receptors play in a whole host of human diseases.

The fluorescent molecules developed as part of the research project will also be useful in drug screening programmes and the University of Nottingham will be making these fluorescent drugs available to the wider scientific community through its links with its spin-out company CellAura Technologies Ltd.