Rolling out mobile phone infrastructure is expensive, difficult and often meets public resistance, but...
Working in NYC every day as I do, the value of parks and green spaces is obvious to me. But finding room for green spaces in ever more crowded cities isn’t easy. NYU graduate student Marco Castro Cosio has hit upon the idea of planting gardens on some previously wasted space found on city streets: the roofs of buses. With New York’s Metropolitan Transportation Authority (MTA) running a fleet of around 4,500 buses, each with a roof area of 340 square feet (31.5 sq m), Cosio says that if a garden were grown on the roof of every one, there would be an extra 35 acres of rolling green space in the city.
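The back-of-the-envelope arithmetic behind that 35-acre figure checks out:

```python
# Rough check of the quoted figure: 4,500 buses x 340 sq ft each,
# converted to acres (43,560 sq ft per acre).
BUSES = 4500
ROOF_SQ_FT = 340
SQ_FT_PER_ACRE = 43_560

total_sq_ft = BUSES * ROOF_SQ_FT        # 1,530,000 sq ft
acres = total_sq_ft / SQ_FT_PER_ACRE    # ~35.1 acres

print(f"{acres:.1f} acres")  # -> 35.1 acres
```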
It might sound a bit far-fetched, but Cosio’s Bus Roots idea has managed to take second place in the DesignWala Grand Idea Competition, and a prototype has already been installed on the roof of a vehicle dubbed the BioBus. The prototype garden only covers a small area at the rear of the BioBus’s roof and is mostly growing small succulents, but it has been traveling around New York for the last five months and has even ventured as far as Ohio.
Cosio says the purpose of the Bus Roots project is to reclaim forgotten space, increase the quality of life and grow the amount of green space in the city. Amongst the benefits of bringing plant life to the city listed by Cosio are mitigation of the urban heat island effect, acoustical and thermal insulation and CO2 absorption – although you’d have to wonder whether the amount of CO2 soaked up by the bus’s rooftop garden is enough to offset the extra fuel the bus will burn through carting the extra soil and plant life around.
If you’re interested in checking out the prototype BioBus, it will be open to the public at the Orpheum Children’s Science Museum in Urbana-Champaign, Illinois on Sunday, October 10 and at the USA Science & Engineering Festival at the National Mall in Washington D.C. from October 23–24.
I will be speaking on a panel at the OMMA Social conference on Thursday, June 17th in NYC, with some folks from Foursquare, Nielsen, SCVNGR and Microsoft, about "How Mobile Social will Change Commerce."
The most magical marketing environment for anyone with something to sell would be one that marries the right person with the right place, the right product and the right time. But this is no longer a dream. Suddenly, we’re at a point where all of those things can be brought together, with social as the glue that connects them. With more and more social activity taking place on mobile, and companies such as Facebook and Google now embracing QR codes, which create a shorthand in which profile data could be read by merchants at the point of sale, the era of in-store customized marketing is almost upon us. What will it look like? And is the early success of companies such as Foursquare an indication that portable social profiles are the wave of the future?
The panel will be moderated by Erik Sass from MediaPost.
Other panelists joining me will be:
Eric Friedman, Director of Client Services, Foursquare
Paul Kultgen, Director Mobile Media and Advertising, Nielsen
Chris Mahl, SVP, Chief Brand Alchemist, SCVNGR
Erin Wilson, Mobile Sales Specialist, Microsoft Advertising
http://bit.ly/OMMA_Social - #OMMASocial
My two-year-old son loves the Moon. He sings about it all day long. He can't wait for nightfall.
I visited my local library the other day and it wasn’t exactly a hive of activity. The Internet and the plethora of technologies capable of distributing its content have forced a decline in activity at your local branch. But a scheme designed to embrace modern alternatives to the printed book could breathe new life into the service: local libraries in the UK are now supplementing conventional lending by offering eBook rentals online.
The service works by offering access to a rentals web site where readers can download books for free; the books are then automatically deleted 14 days later, which eliminates the problem of chasing up overdue books and handing out late fines. Currently, Essex, Luton and Windsor & Maidenhead are offering the service, with Hampshire, Liverpool and Norfolk looking to get involved in the near future.
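The expiry mechanism is simple to sketch. Here's a hypothetical illustration of the 14-day rule, not the actual DRM implementation the libraries use:

```python
from datetime import datetime, timedelta

LOAN_PERIOD = timedelta(days=14)

def is_loan_expired(downloaded_at: datetime, now: datetime) -> bool:
    """A loan lapses automatically 14 days after download --
    no overdue notices, no late fines."""
    return now - downloaded_at >= LOAN_PERIOD

# Example: a book downloaded 15 days ago is no longer readable.
downloaded = datetime(2009, 9, 1)
print(is_loan_expired(downloaded, datetime(2009, 9, 16)))  # True
```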
Luton library representative Fiona Marriot highlights the increasing interest in the scheme, stating "In recent weeks the number of eBook downloads has been increasing fast, and there are people emailing us from all over the country and even abroad asking if they can join as members online." Indeed, more than 250 new users have signed up to get involved in the Luton area, which doesn’t sound like a lot until you realize that only local residents can join the service.
eBooks can obviously be transferred to most eBook readers (though unfortunately not Amazon’s Kindle, which uses a proprietary format), and with Asus announcing the world’s cheapest eBook reader last month and other notable parties such as Barnes & Noble and Plastic Logic getting involved, this seems like a market that’s clearly on the up.
Former president of The Society of Chief Librarians Tory Durcan states "Book issues have seriously declined in recent years. This is an exciting development. These are not going to replace the paper book, they are as well as." He also cites additional advantages for older readers, who can control the print size on an electronic reader, and says libraries may even consider lending devices to older, housebound residents.
The scheme looks to redress figures recently announced by the Department for Culture, Media and Sport, which state that annual library visits have dropped from 302 million 10 years ago to 280 million and are continuing to fall sharply. So although the new service may help increase the number of library visits, they are likely to be of the virtual kind.
Recently, I had a dialogue with some colleagues (Tina and RJ) about technology and the future. The focus of our discussion was the Metaverse and the Singularity, although my colleagues were unfamiliar with these exact terms. I believe the dialogue important enough to share some thoughts about that discussion and the Singularity prior to the Singularity Summit (which is happening in NYC on October 3-4), and I encourage anyone reading this to attend.
Yes, this post is long, but worthwhile, if for no other reason than to share the ideas of the Singularity and the Metaverse, as well as some new thoughts I had on those subjects.
So, the conversation with my colleagues went like this (paraphrasing):
- "What happens when.. virtual worlds meet geospatial maps of the planet?"
- "When simulations get real and life and business go virtual?"
- "When you use a virtual Earth to navigate the physical Earth, and your avatar becomes your online agent?"
-- "What happens then," I said, "is called the Metaverse."
I recall an observation made by polio vaccine pioneer Dr. Jonas Salk. He said that the most important question we can ask of ourselves is, "are we being good ancestors?"
This is a particularly relevant question for those of us who will be attending the Singularity Summit this year. In our work, in our policies, in our choices, in the alternatives that we open and those that we close, are we being good ancestors? Our actions, our lives have consequences, and we must realize that it is incumbent upon us to ask if the consequences we're bringing about are desirable.
This question was a big part of the conversation with my colleagues. It is not an easy question to answer, in part because it can be an uncomfortable examination. But it becomes especially challenging when we recognize that even small choices matter. It's not just the multi-billion dollar projects and unmistakably world-altering ideas that will change the lives of our descendants. Sometimes, perhaps most of the time, profound consequences can arise from the most prosaic of topics.
Which is why I'm going to write a bit here about video games.
Well, not just video games, but video games and camera phones (a subject, as many of my readers know, I happen to know quite a bit about), and Google Earth and the myriad day-to-day technologies that, individually, may attract momentary notice, but in combination, may actually offer us a new way of grappling with the world. And just might, along the way, help to shape the potential for a safe Singularity.
In the Metaverse Roadmap Overview the authors sketch out four scenarios of how a combination of forces driving the development of immersive, richly connected information technologies may play out over the next decade. But what has struck me more recently about the roadmap scenarios is that the four worlds could also represent four pathways to a Singularity. Not just in terms of the technologies, but—more importantly—in terms of the social and cultural choices we make while building those technologies.
The four metaverse worlds emerged from a relatively commonplace scenario structure. The authors arrayed two spectra of possibility against each other, thereby offering four outcomes. Analysts sometimes refer to this as the "four-box" method, and it's a simple way of forcing yourself to think through different possibilities.
This is probably the right spot to insert this notion: scenarios are not predictions, they're provocations. They're ways of describing different future possibilities not to demonstrate what will happen, but to suggest what could happen. They offer a way to test out strategies and assumptions—what would the world look like if we undertook a given action in these four futures?
To construct the scenario set the authors selected two themes likely to shape the ways in which the Metaverse unfolds: the spectrum of technologies and applications ranging from augmentation tools that add new capabilities to simulation systems that model new worlds; and the spectrum ranging from intimate technologies, those that focus on identity and the individual, to external technologies, those that provide information about and control over the world around you. These two spectra collide and contrast to produce four scenarios.
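The four-box method is literally a cross-product of the two axes. A toy sketch (the scenario names come from the roadmap; the code structure is my own illustration):

```python
from itertools import product

# The two spectra the roadmap authors arrayed against each other.
technology_axis = ["Augmentation", "Simulation"]   # new capabilities vs. new worlds
focus_axis = ["Intimate", "External"]              # the self vs. the world around you

# Crossing them yields the four scenario "boxes".
scenarios = {
    ("Simulation", "Intimate"): "Virtual Worlds",
    ("Simulation", "External"): "Mirror Worlds",
    ("Augmentation", "External"): "Augmented Reality",
    ("Augmentation", "Intimate"): "Lifelogging",
}

for tech, focus in product(technology_axis, focus_axis):
    print(f"{tech} x {focus} -> {scenarios[(tech, focus)]}")
```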
The first, Virtual Worlds, emerges from the combination of Simulation and Intimate technologies. These are immersive representations of an environment, one where the user has a presence within that reality, typically as an avatar of some sort. Today, this means World of Warcraft, Second Life, PlayStation Home and the like.
Over the course of the Virtual Worlds scenario, we'd see the continued growth and increased sophistication of immersive networked environments, allowing more and more people to spend substantial amounts of time engaged in meaningful ways online. The ultimate manifestation of this scenario would be a world in which the vast majority of people spend essentially all of their work and play time in virtual settings, whether because the digital worlds are supremely compelling and seductive, or because the real world has suffered widespread environmental and economic collapse.
The next scenario, Mirror Worlds, comes from the intersection of Simulation and Externally-focused technologies. These are information-enhanced virtual models or “reflections” of the physical world, usually embracing maps and geo-locative sensors. Google Earth is probably the canonical present-day version of an early Mirror World.
While undoubtedly appealing to many individuals, the real power of the Mirror World setting, in my view, falls to institutions and organizations seeking a more complete, accurate and nuanced understanding of the world's transactions and underlying systems. The capabilities of Mirror World systems are enhanced by a proliferation of sensors and remote data gathering, giving these distributed information platforms a global context. Geospatial, environmental and economic patterns could be easily represented and analyzed. Undoubtedly, political debates would arise over just who does, and does not, get access to these models and databases.
Thirdly, Augmented Reality looks at the collision of Augmentation and External technologies. Such tools would enhance the external physical world for the individual, through the use of location-aware systems and interfaces that process and layer networked information on top of our everyday perceptions.
Augmented Reality makes use of the same kinds of distributed information and sensory systems as Mirror Worlds, but does so in a much more granular, personal way. The AR world is much more interested in depth than in flows: the history of a given product on a store shelf; the name of the person waving at you down the street (along with her social network connections and reputation score); the comments and recommendations left by friends at a particular coffee shop, or bar, or bookstore. This world is almost vibrating with information, and is likely to spawn as many efforts to produce viable filtering tools as there are projects to assign and recognize new data sources.
Lastly, we have Lifelogging, which brings together Augmentation and Intimate technologies. Here, the systems record and report the states and life histories of objects and users, enhancing observation, recall, and communication. I've sometimes discussed one version of this as the "participatory panopticon."
Here, the observation tools of an Augmented Reality world get turned inward, serving as an adjunct memory. Lifelogging systems are less apt to be attuned to the digital comments left at a bar than to the spoken words of the person at the table next to you. These tools would be used to capture both the practical and the ephemeral, like where you left your car in the lot and what it was that made your spouse laugh so much. Such systems have obvious political implications, such as catching a candidate's gaffe or a bureaucrat's corruption. But they also have significant personal implications: what does the world look like when we know that everything we say or do is likely to be recorded?
This underscores a deep concern that crosses the boundaries of all four scenarios: trust.
"Trust" encompasses a variety of key issues: protecting privacy and being safely visible; information and transaction security; and, critically, honesty and transparency. It wouldn't take much effort to turn all four of these scenarios into dystopias. The common element of the malevolent versions of these societies would be easy to spot: widely divergent levels of control over and access to information, especially personal information. The ultimate importance of these scenarios isn't just the technologies they describe, but the societies that they create.
So what do these tell us about a Singularity?
Across the four Metaverse scenarios, we can see a variety of ways in which the addition of an intelligent system would enhance the audience's experience. Dumb non-player characters and repetitive bots in virtual worlds, for example, might be replaced by virtual people essentially indistinguishable from characters controlled by human users. Efforts to make sense of the massive flows of information in a Mirror World setting would be enormously enhanced with the assistance of sophisticated machine analysts. Augmented Reality environments would thrive with truly intelligent agent systems, knowing what to filter and what to emphasize. In a lifelogging world, an intelligent companion in one's mobile or wearable system would be needed in order to figure out how to index and catalog memories in a personally meaningful way; it's likely that such a system would need to learn how to emulate your own thought processes, becoming a virtual shadow.
None of these systems would truly need to be self-aware, self-modifying intelligent machines—but in time, each could lead to that point.
But if the potential benefits of these scenario worlds would be enhanced with intelligent information technology, so too would the dangers. Unfortunately, avoiding dystopian outcomes is a challenge that may be trickier than some expect—and is one with direct implications for all of our hopes and efforts for bringing about a future that would benefit human civilization, not end it.
It starts with a basic premise: software is a human construction. That's obvious when considering code written by hand over empty pizza boxes and stacks of paper coffee cups. But even the closest process we have to entirely computer-crafted software—emergent, evolutionary code—still betrays the presence of a human maker: evolutionary algorithms may have produced the final software, and may even have done so in ways that remain opaque to human observers, but the goals of the evolutionary process, and the selection mechanism that drives the digital evolution towards these goals, are quite clearly of human origin.
To put it bluntly, software, like all technologies, is inherently political. Even the most disruptive technologies, the innovations and ideas that can utterly transform society, carry with them the legacies of past decisions, the culture and history of the societies that spawned them. Code inevitably reflects the choices, biases and desires of its creators.
This will often be unambiguous and visible, as with digital rights management. It can also be subtle, as with operating system routines written to benefit one application over its competitors (I know some of you reading this are old enough to remember "DOS isn't done 'til Lotus won't run"). Sometimes, code may be written to reflect an even more dubious bias, as with the allegations of voting machines intentionally designed to make election-hacking easy for those in the know. Much of the time, however, the inclusion of software elements reflecting the choices, biases and desires of its creators will be utterly unconscious, the result of what the coders deem obviously right.
We can imagine parallel examples of the ways in which metaverse technologies could be shaped by deeply-embedded cultural and political forces: the obvious, such as lifelogging systems that know to not record digitally-watermarked background music and television; the subtle, such as augmented reality filters that give added visibility to sponsors, and make competitors harder to see; the malicious, such as mirror world networks that accelerate the rupture between the information haves and have-nots—or, perhaps more correctly, between the users and the used; and, again and again, the unintended-but-consequential, such as virtual world environments that make it impossible to build an avatar that reflects your real or desired appearance, offering only virtual bodies sprung from the fevered imagination of perpetual adolescents.
So too with what we today talk about as a "singularity." The degree to which human software engineers actually get their hands dirty with the nuts & bolts of AI code is secondary to the basic condition that humans will guide the technology's development, making the choices as to which characteristics should be encouraged, which should be suppressed or ignored, and which ones signify that "progress" has been made. Whatever the degree to which post-singularity intelligences would be able to reshape their own minds, we have to remember that the first generation will be our creations, built with interests and abilities based upon our choices, biases and desires.
This isn't intrinsically bad; emerging digital minds that reflect the interests of their human creators is a lever that gives us a real chance to make sure that a "singularity" ultimately benefits us. But it holds a real risk. Not that people won't know that there's a bias: we've lived long enough with software bugs and so-called "computer errors" to know not to put complete trust in the pronouncements of what may seem to be digital oracles. The risk comes from not being able to see what that bias might be.
Many of us rightly worry about what might happen with "Metaverse" systems that analyze our life logs, that monitor our every step and word, that track our behavior online so as to offer us the safest possible society—or best possible spam. Imagine the risks associated with trusting that when the creators of emerging self-aware systems say that they have our best interests in mind, they mean the same thing by that phrase that we do.
For me, the solution is clear. Trust depends upon transparency. Transparency, in turn, requires openness.
We need an Open Singularity.
At minimum, this means expanding the conversation about the shape that a singularity might take beyond a self-selected group of technologists and philosophers. An "open access" singularity, if you will. Ray Kurzweil's books and lectures are a solid first step, but the public discourse around the singularity concept needs to reflect a wider diversity of opinion and perspective.
If the singularity is as likely and as globally, utterly transformative as many here believe, it would be profoundly unethical to make it happen without including all of the stakeholders in the process—and we are all stakeholders in the future.
World-altering decisions made without taking our vast array of interests into account are intrinsically flawed, likely fatally so. They would become catalysts for conflicts, potentially even the triggers for some of the "existential threats" that may arise from transformative technologies. Moreover, working to bring in diverse interests has to happen as early in the process as possible. Balancing and managing a global diversity of needs won't be easy, but it will be impossible if democratization is thought of as a bolt-on addition at the end.
Democracy is a messy process. It requires give-and-take, and an acknowledgement that efficiency is less important than participation.
We may not have an answer now as to how to do this, how to democratize the singularity. If this is the case—and I suspect that it is—then we have added work ahead of us. The people who have embraced the possibility of a singularity should be working at least as hard on making possible a global inclusion of interests as they do on making the singularity itself happen. All of the talk of "friendly AI" and "positive singularities" will be meaningless if the only people who get to decide what that means are the few hundred who read and understand this blog posting.
My preferred pathway would be to "open source" the singularity, to bring in the eyes and minds of millions of collaborators to examine and co-create the relevant software and models, seeking out flaws and making the code more broadly reflective of a variety of interests. Such a proposal is not without risks. Accidents will happen, and there will always be those few who wish to do others harm. But the same is true in a world of proprietary interests and abundant secrecy, and those are precisely the conditions that can make effective responses to looming disasters difficult. With an open approach, you have millions of people who know how dangerous technologies work, know the risks that they hold, and are committed to helping to detect, defend and respond to crises. That these are, in Bill Joy's term, "knowledge-enabled" dangers means that knowledge also enables our defense; knowledge, in turn, grows faster as it becomes more widespread. This is not simply speculation; we've seen time and again, from digital security to the global response to influenza, that open access to information-laden risks ultimately makes them more manageable.
The Metaverse Roadmap offers a glimpse of what the next decade might hold, but does so recognizing that the futures it describes are not end-points, but transitions. The choices we make today about commonplace tools and everyday technologies will shape what's possible, and what's imaginable, with the generations of technologies to come. If the singularity is in fact near, the fundamental tools of information, collaboration and access will be our best hope for making it happen in a way that spreads its benefits and minimizes its dangers—in short, making it happen in a way that lets us be good ancestors.
If we're willing to try, we can create a future, a singularity, that's wise, democratic and sustainable—a future that's open. Open as in transparent. Open as in participatory. Open as in available to all. Open as in filled with an abundance of options.
The shape of tomorrow remains in our grasp, and will be determined by the choices we make today. Choose wisely.
I have been thinking about how humans learn now and could learn in the future. Recently, various studies have been published including some that document the amazing amount of brain development that happens in infants and later on in childhood.
This is especially relevant to me as my son is just about to be 17 months old, and as I watch him grow and learn, I have been doing some new thinking about education. I also do a great deal of work for Educational Testing Service. Founded in 1947, ETS develops, administers and scores more than 50 million knowledge metrics, or tests, annually.
So, here's my prognostication: in the future, more and more of us will learn from social robots, especially kids learning pre-school skills and students of all ages studying a new language. There will be a "new science of learning," which brings together recent findings from the fields of psychology, neuroscience, machine learning and education.
The premise for my thinking: We humans are born immature and naturally curious, and become creatures capable of highly complex cultural achievements — such as the ability to build schools and school systems that can teach us how to create computers that mimic our brains.
With a stronger understanding of how this learning happens, scientists are coming up with new principles for human learning, new educational theories and designs for learning environments that better match how we learn best. And social robots have a potentially growing role in these future learning environments. The mechanisms behind these sophisticated machines apparently complement some of the mechanisms behind human learning.
One such robot, which looks like the head of Albert Einstein, was revealed this week to show facial expressions and react to real human expressions. The researchers who built the strikingly real-looking yet body-less 'bot plan to test it in schools.
In the first 5 years of life, our learning is exuberant and "effortless." We are born learning, and adults are driven to teach infants and children. During those years and up to puberty, our brains exhibit "neural plasticity"—it's easier to learn languages, including foreign languages. It's almost magical how we learn the language that becomes our native tongue in the first two or three years we're alive.
Magic aside, our early learning is computational.
Children under three and even infants have been found to use statistical thinking, such as frequency distributions and probabilities and covariation, to learn the phonetics of their native tongue and to infer cause-effect relationships in the physical world.
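That statistical machinery can be sketched concretely. Here's a toy version of transitional-probability learning over a syllable stream—the cue infants use to find word boundaries (the data and syllable inventory are my own illustration, not from any actual study):

```python
from collections import Counter

# Syllable pairs that co-occur reliably are likely to belong to the
# same word; pairs that span word boundaries co-occur less reliably.
syllables = "ba by ba by go od ba by go od go od".split()

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def transitional_probability(a: str, b: str) -> float:
    """Estimate P(b follows a) from the corpus."""
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("ba", "by"))  # within-word pair: 1.0
print(transitional_probability("by", "go"))  # across-word pair: ~0.67
```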
Some of these findings have helped engineers build machines that can learn and develop social skills, such as BabyBot, a baby doll trained to detect human faces.
Meanwhile, our learning is also highly social, so social, in fact, that newborns as young as 42 minutes old have been found to match gestures shown to them, such as someone sticking out her tongue or opening his mouth.
Imitation is a key component to our learning — it's a faster and safer way to learn than just trying to figure something out on our own.
Even as adults, we use imitation when we go to a new setting such as a dinner party or a foreign country, to try and fit in. Of course, for kids, the learning packed into every day can amount to traveling to a foreign country. In this case, they are "visiting" adult culture and learning how to act like the people in our culture, becoming more like us.
If you roll all these human learning features into the field of robotics, there is a somewhat natural overlap — robots are well-suited to imitate us, learn from us, socialize with us and eventually teach us.
Social robots are being used on an experimental basis already to teach various skills to preschool children, including the names of colors, new vocabulary words and simple songs.
In the future, robots will likely be used to teach only certain skills, such as acquiring a foreign or new language, possibly in playgroups with children or to individual adults. And robot teachers can be cost-effective compared to the expense of paying a human teacher.
If we can capture the magic of social interaction and pedagogy, what makes social interaction so effective as a vehicle for learning, we may be able to embody some of those tricks in machines, including computer agents, automatic tutors, and robots.
Still, children clearly learn best from other people and playgroups of peers, and I don't see children in the future being taught entirely by robots.
Terrence Sejnowski of the Temporal Dynamics of Learning Center (TDLC) at the University of California, San Diego, is working on using technology to merge the social with the instructional, bringing it to bear in classrooms to create personalized, individualized teaching tailored to students while tracking their progress.
"By developing a very sophisticated computational model of a child's mind, we can help improve that child's performance," Sejnowski said.
Overall, the hope, in my mind anyway, would be to figure out how to combine the passion and curiosity for learning that children display with formal schooling. There is no reason why curiosity and passion can’t be fanned at school where there are dedicated professionals, teachers, trying to help children learn. Right?
A world where humans, motor vehicles and the infrastructure will wirelessly communicate and co-operate for the greater good came a step closer this week when the European Commission reserved part of the radio spectrum for smart vehicle communications systems (so-called co-operative systems). I have talked about telematics in my lectures and writing before, and this is part of that whole concept.
The networked road system envisaged by the European Intelligent Car Initiative promotes the use of ICT to achieve smarter, safer and cleaner road transport, and comes not a moment too soon: already 24% of European driving time is spent in traffic jams, and it’ll get much worse before it gets better. Research suggests the costs caused by traffic congestion could reach EUR 80 billion by 2010.
The wireless system will allow cars to 'talk' to other cars and to the road infrastructure providers. The system will, for example, warn other drivers of slippery roads or of a crash which just happened. Smart vehicle communication systems have the potential to make roads safer and ease the lives of Europe's drivers: in 2006, more than 42,000 people died in road accidents in the European Union and more than 1.6 million were injured, while every day there are some 7,500 km of traffic jams on the EU's roads. The Commission decision is intended to foster investment in smart vehicle communication systems by the automotive industry, at the same time spurring public funding in essential roadside infrastructure.
The Commission decision provides a single EU-wide frequency band that can be used for immediate and reliable communication between cars, and between cars and roadside infrastructure. It is 30 MHz of spectrum in the 5.9 Gigahertz (GHz) band which will be allocated within the next six months by national authorities across Europe to road safety, without barring other services already in place (such as amateur radio services). EU Telecoms Commissioner Viviane Reding described the decision as “a decisive step towards meeting the European goal of reducing road accidents.”
“Getting critical messages through quickly and accurately is a must for road safety,” she said. “We should also keep in mind that with 24% of Europeans' driving time spent in traffic jams the costs caused by congestion could reach €80 billion by 2010. So clearly saving time through smart vehicles communications systems means saving money."
A typical example is the case of a vehicle detecting a slippery patch on a road: if it is equipped with a cooperative car-to-car communication device, it can deliver this information to all cars located nearby. If a traffic management centre needs to inform drivers about a sudden road closure, an alternative route to take or a change in speed limits, it will also be able to send this information to a roadside transmitter along the respective road, which then passes it on to vehicles driving in the vicinity.
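The hazard-warning flow described above can be sketched in a few lines of code. This is a minimal illustration only, with a hypothetical message format and a crude proximity check; the actual EU system uses standardized ITS message formats broadcast over the 5.9 GHz band, not this ad-hoc JSON structure.

```python
import time

def make_hazard_warning(vehicle_id, lat, lon, hazard):
    """Build a broadcast message describing a road hazard (hypothetical format)."""
    return {
        "sender": vehicle_id,
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "hazard": hazard,  # e.g. "slippery_road", "accident", "road_closure"
    }

def should_alert(message, own_lat, own_lon, radius_deg=0.05):
    """Decide whether a receiving vehicle is close enough to care.

    Uses a simple bounding-box check in degrees as a stand-in for a
    real geographic distance calculation.
    """
    pos = message["position"]
    return (abs(pos["lat"] - own_lat) <= radius_deg and
            abs(pos["lon"] - own_lon) <= radius_deg)

# One car detects ice and broadcasts; nearby cars alert, distant cars ignore.
msg = make_hazard_warning("car-42", 50.85, 4.35, "slippery_road")
print(should_alert(msg, 50.86, 4.36))   # a car a few hundred metres away
print(should_alert(msg, 51.50, 0.12))   # a car in another city
```

The same filtering logic applies whether the message originates from another vehicle or from a traffic management centre relaying via roadside infrastructure.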
According to an article published today in The Independent, a new survey of 700 children suggests they have lost touch with the natural world and are unable to identify common animals and plants.
Half of youngsters aged nine to 11 were unable to identify a daddy-long-legs, oak tree, blue tit or bluebell, in the poll by BBC Wildlife Magazine. The study also found that playing in the countryside was children's least popular way of spending their spare time, and that they would rather see friends or play on their computer than go for a walk or play outdoors.
Personally, I don't find this terribly shocking or a cause for any alarm.
If the emotive aspects of synthetic environments are developed with appropriate consideration, the end result could be really valuable. That has been the premise of educational programming since its inception. The impact, reach and influence of technology has changed, and our expectations of the child's view of the world must shift accordingly.
The dichotomy between the physical and synthetic worlds will continue to grow. With our new economies and technologies it makes perfect sense that the natural order of things in the minds of children has shifted to focus more on cataloging, engaging with and understanding synthetic worlds. Here's a link to the original article.
According to a new prediction from Nokia, up to 25% of the entertainment consumed by people in 2012 will have been created, edited and shared within their peer circle rather than coming out of traditional media sources. This user-generated content phenomenon has been dubbed “Circular Entertainment” and could be the future of news information delivery.
The statement from Nokia is backed by a global study, entitled "A Glimpse of the Next Episode", carried out by The Future Laboratory, which combined views from industry-leading figures with Nokia's own research among its 900 million consumers around the world. The mobile phone giant has constructed a global picture of what it believes entertainment will look like over the next five years. With a marked rise in awareness of movements such as Wikipedia, Creative Commons and blogging, there has been a shift in thinking regarding user-generated content. No longer is it considered untrustworthy or inaccurate, as was the case in earlier years. “The trends we are seeing show us that people will have a genuine desire not only to create and share their own content, but also to remix it, mash it up and pass it on within their peer groups - a form of collaborative social media," said Mark Selby, Vice President, Multimedia, Nokia. Of the 9,000 people surveyed in the Future Laboratory study, a staggering 39% watch TV on the internet, 46% regularly use an instant messenger program and 29% regularly blog.
Nokia views Circular Entertainment as working like this - someone shares video footage they shot on their mobile phone from a night out with a friend, that friend takes the footage and adds an MP3 file, then passes it to another friend. That friend edits the footage by adding some photographs and passes it on to another friend, and so on. The content keeps circulating between friends. Interesting.
According to Tom Savigar, Trends Director at The Future Laboratory, "Consumers are increasingly demanding their entertainment be truly immersive, engaging and collaborative. Whereas once the act of watching, reading and hearing entertainment was passive, consumers now and in the future will be active and unrestrained by the ubiquitous nature of circular entertainment.” This “immersive living” is the rise of lifestyles which blur the reality of being on and offline. Entertainment will no longer be segmented; people can access and create it wherever they are.
Well, no kidding Nokia...