Let’s start with something most people can probably agree on: information is accumulating online. The amount of available information is increasing at an exponential rate; some say it doubles every other year. This means that any illusion of being able to stay up to date with everything that is going on is utopian, and probably has been since Gutenberg invented the press.
Most people know this, yet that is exactly what we all seem to be doing.
There is no shortage of content aggregators and aggregators of aggregators, developed daily to give us a better overview of all the sources of information we have subscribed to and now find ourselves depending on.
This has resulted in an endless stream of articles, news, pictures, websites, products, updates, comments on updates and comments on those comments, delivered to us second by second, that each of us has to deal with.
Constantly checking our feeds for new information, we seem to be hoping to discover something of interest, something that we can share with our networks, something that we can use, something that we can talk about, something that we can act on, something we didn’t know we didn’t know.
It almost seems like an obsession, and many critics of digital technology would argue that by consuming information this way we run the risk of destroying social interaction between humans. One might even say that we have become slaves of the feed.
It might be an obsession, but I think it’s an obsession that many critics will find themselves having to submit to sooner or later.
The digital space is real but different.
Accumulating information is not the only thing that progresses exponentially. Human social interaction also moves online at an accelerating pace, which means that the consequences of our actions in the digital space exponentially affect what happens not only in the digital space but also in the physical space, and vice versa.
If you doubt this, just ask the music, movie, telco, publishing, financial, news, media, photography, design, illustration, programming, consultancy, accounting and advertising industries. They have all felt the impact of this trend, forcing them to rethink how they approach their businesses.
In the digital space there is close to zero friction. The limitations of the physical space do not apply, and taking advantage of network effects has never been easier. Whether you are the sender, the receiver or the relayer, information that used to take days or even weeks to reach the public mind now takes only hours or even minutes to spread to the far corners of the planet. Information is becoming more and more transparent, bringing companies to their knees, unsettling governments and allowing for new ways to interact globally and instantly.
It’s not without problems though. With the increase in information and near zero friction emerges the issue of noise and redundancy.
To get “signal” we need to plow through our noisy feeds to find the gold nuggets that matter to us. This is manual work, in which our inability to consume more than one feed item at a time becomes the bottleneck for how fast we can process and evaluate the information. Something’s gotta give.
This is not the real time web you’ve been looking for
It’s clear that we need information, because we increasingly orient ourselves through our online living. But it’s also quite obvious that our natural ability to process the very information we need doesn’t scale well.
The paradox we find ourselves in is that on one hand we don’t know what we don’t know so it doesn’t really make sense to exclude any sources of information.
On the other hand, far less of what we are forced to consume is really relevant, and we only find out which parts after we have consumed them.
In a world where time is one of the most precious resources this doesn’t compute.
We need quality instead of quantity in our feeds. We need a better ability to find the gold nuggets. But as some of you have probably already asked yourselves: what is quality? How can we know what is truly relevant? Thus we find ourselves in an unsettling scenario.
Designing for the bottleneck
In other words, the aggregators we have are capable of harvesting almost as much information as we want, but we still have to evaluate each piece of information ourselves, meaning that we have to design the aggregators around the bottleneck. Meaning us.
There are attempts to solve this in order to create better-quality data streams. Wordburst algorithms, which look for words or sentences that suddenly peak within a short period of time, are one example. The popularity of a given feed item might be a different approach. But right now most of these algorithms don’t take the individual’s interest-space into account. Instead they look at global trends, and as much as I believe that New Moon the movie is a great youth movie, I was kind of hoping for New Moon the moon when I clicked on the tag in the trend cloud.
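The word-burst idea can be sketched in a few lines: compare each word’s recent frequency against its longer-term baseline and flag the ones that spike. This is a toy Python sketch of the general technique, not any particular system’s algorithm; the function name, window size and smoothing constant are my own choices.

```python
from collections import Counter

def burst_scores(history, window=3, threshold=3.0):
    """Flag words whose frequency in the most recent `window` time
    slices exceeds their baseline frequency by `threshold`-fold.

    `history` is a list of word-lists, one per time slice
    (e.g. all tag words seen in each hour).
    """
    recent = Counter(w for batch in history[-window:] for w in batch)
    baseline = Counter(w for batch in history[:-window] for w in batch)
    n_recent = max(1, len(history[-window:]))
    n_base = max(1, len(history[:-window]))

    bursts = {}
    for word, count in recent.items():
        recent_rate = count / n_recent
        base_rate = baseline.get(word, 0) / n_base
        # Smooth the baseline so previously unseen words still score
        score = recent_rate / (base_rate + 0.1)
        if score >= threshold:
            bursts[word] = round(score, 2)
    return bursts
```

Note that a score like this is still global: every reader sees the same bursts. Personalizing it would mean computing the baseline per reader, from their own interest-space rather than the whole stream.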
We find ourselves in a situation where there is no shortage of information in the digital space, but only a very limited ability to extract what is relevant, making us dependent on so much manual labor that one could be excused for thinking slavery had in fact been reintroduced.
Surely there must be a better way to deal with information. A way to put the laborious task of monitoring information in the hands of the machines we use, rather than on us.
Social machines – our subconscious memory
One way to do this might be for our machines (computers, cell phones, PDAs) to start exchanging much more information and build tighter relationships with each other. The quality of the data in our feeds right now depends on which sources we are aware of and point them to. But so much valuable information is hidden in the exchange between our machines, and I believe this is one of the main reasons why we are still only designing for the bottleneck.
If I have been visiting the MagmaBooks online shop, then all sorts of relevant information could be retrieved. One of these things could be a physical address, if one exists, so that the next time I am in London, their machine will inform my machine (a location-aware mobile) that they are just around the corner from where I am.
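The around-the-corner scenario above could be sketched as a simple proximity check: my machine keeps coordinates for places I’ve shown interest in, and compares them against my current position. A minimal Python sketch, assuming the machines have already exchanged addresses and geocoded them; the place names and coordinates below are illustrative, not MagmaBooks’ actual location.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def nearby_interests(my_lat, my_lon, visited_places, radius_km=1.0):
    """Return places from my browsing history whose physical address
    lies within `radius_km` of my current location."""
    return [name for name, (lat, lon) in visited_places.items()
            if distance_km(my_lat, my_lon, lat, lon) <= radius_km]
```

The interesting part is not the geometry but where the data comes from: the `visited_places` map is built silently from machine-to-machine exchange, so the notification arrives without me ever having maintained a list.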
In other words, while humans operate at one level, actively engaging in whatever we might be interested in, our machines should be building machine-social relationships underneath, based on those engagements. This way a more context-aware ecosystem could emerge, one that creates indirect and potentially meaningful relationships without bothering us to process every information snippet.
The way towards better quality in our feeds is not by cutting down on information but by increasing the amount of information. Not by adding yet another source for manual consumption, but by feeding the system, allowing for the exchange of information on a sub-human level machino-e-machino.
That way we can finally start to build the kind of relationship that is necessary for what I am going to talk about next.
Information as matter
Most people know that what allows us to read well is not that we spell out each word letter by letter, but that we read word by word, phrase by phrase, or line by line. Some people are even capable of taking in almost entire paragraphs at a glance.
Perhaps what our machines should do is to read information snippets the same way we read words and sentences. Perhaps information can be gathered and represented not on an entry by entry basis but as a model of a digital reality based on accumulated information.
Perhaps we need to design for projection rather than the bottleneck?
This means that we must approach information as our brains approach matter: as discrete objects as well as a whole. This way noise becomes part of the signal, and instead of burdening us with having to relate to it on a one-to-one basis, it’s there to provide the background that meaning arises from. It’s not a feed we have to go through but part of our reality, overlaid on top of our physical reality.
The sole purpose of information as matter will be to provide us with enough information to reach better projection. The more information we can gather, the higher the fidelity of the projection. The higher the fidelity of the projection the better our feeds become. That is if we can even call them feeds anymore.
Perhaps this is what virtual reality really should mean. Not a 3D projection done by an architect with a specific composition in mind, but rather a framework for representing information as matter in a landscape that doesn’t discriminate between noise and signal. When it really comes down to it, isn’t one man’s noise another man’s signal?
I am not sure what it really means to design for projection. I am aware that it might seem a little far out, and I admit that I am not entirely clear on everything, but I know that the current way we approach information can’t be the final word on the matter. We need to free ourselves from the manual labor of watching our feeds; we can do so much more with our time. And to do that we need to shift the burden onto the machines themselves rather than the other way round.
Perhaps starting to think about information differently will free us from the chains we have already been burdened with for too long.
Anyone else out there thinking about this? Let me know what you think.
Update : I was recently interviewed by Phil Windley for his Technometria podcast about this article. Go check it out http://itc.conversationsnetwork.org/shows/detail4402.html