There are two expressions I keep coming across in reading about the future of everything that’s web-based, and social media in particular: “content curation” and the “human algorithm”.
Curation is the caretaking or presentation of things entered into a collection, either physical or digital. With the onslaught of information from all sides, some sort of curation needs to be implemented to collect, filter, verify and disseminate news, entertainment and human interaction in the broadest sense.
An algorithm, according to Wikipedia, is an effective method expressed in mathematics and computer science as a finite list of well-defined instructions for calculating a function. (Gosh, I don’t miss math classes.) Algorithms are used for calculation, data processing and automated reasoning. So in a way, an algorithm is the mathematical brother of the more artsy curation.
So where do curation and the human algorithm come into play? Where curation means that people manually verify and decide what content to present regardless of the reader’s on-line behavior, the human algorithm is a program fed by ‘trust agents’ that gets you the real-time information you’re looking for based on your previous on-line behavior and searches. The ‘human’ comes as much from your behavior as it does from the behavior of millions of other on-line users who share some of your on-line habits.
However, the above-mentioned ‘trust agents’ are key. In curating, as in programming, the challenge lies in finding trustworthy sources and networks of followers with ‘good reputations’. Tweets and social content need to be tied to networks of so-called trust agents and their sub-groups of followers. Connectivity – being linked to and linking – is the most important factor in attaining trustworthy status. Somebody on Twitter with a whole lot of followers, “links” and “recommendations” will, in a Google search on that person, come up over another person with similar content but a lesser reputation and less trust. This kind of ranking is referred to as the “human algorithm” – I’m oversimplifying this. You might want to read the following: Brian Solis on “The Human Algorithm and how Google ranks Tweets in real-time Search”; Mark Little of Storyful, “The Human Algorithm”, which really talks about curation; and Mathew Ingram of Gigaom writing about the “Future of Media: Curation, Verification and News as a Process”. The last two articles are a bit redundant, but both talk extensively about the verification process of actual news stories, which is fascinating and labor-intensive.
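To make the reputation idea a little more concrete, here is a toy sketch in Python. Everything in it is hypothetical – the names, the weights and the formula are my own illustration, not how Google or anyone else actually ranks people – but it shows the gist: two authors with similar content end up ordered by followers, links and recommendations.

```python
import math

def reputation_score(followers, links, recommendations):
    # Log-dampen the follower count so huge accounts don't dominate,
    # then add a point for each inbound link and recommendation.
    # (Made-up formula, purely for illustration.)
    return math.log1p(followers) + links + recommendations

authors = [
    {"name": "alice", "followers": 50_000, "links": 120, "recommendations": 40},
    {"name": "bob",   "followers": 1_000,  "links": 10,  "recommendations": 5},
]

# Sort so the higher-reputation author "comes up over" the other.
ranked = sorted(
    authors,
    key=lambda a: reputation_score(a["followers"], a["links"], a["recommendations"]),
    reverse=True,
)
print([a["name"] for a in ranked])
```

With these made-up numbers, alice outranks bob even though both might say much the same thing – which is exactly the point (and the worry) of reputation-driven ranking.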
And to round it up: Soren Gordhamer of Mashable talks about the future of social media and its three pressing questions: distraction, filter, and capacity. The first is self-explanatory and so is the third, but I would like to expand a bit on the second, filter: increasingly, search engines give us the information they THINK we want to see. If you were to Google your neighbor from your home computer and then again from a coffee shop, you could quite likely get entirely different results. Google, Bing and other search engines are filtering the search for you based on your browsing history, social media interactions and on-line purchasing habits.
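The neighbor example can be sketched in a few lines of Python. Again, this is a hypothetical toy – the data and the re-ranking rule are invented, not any real search engine’s method – but it shows how the same query can return different orderings for two browsing histories.

```python
def personalize(results, history):
    # Boost each result by how many of its topic tags overlap with
    # the user's browsing history (toy re-ranking rule).
    return sorted(results, key=lambda r: -len(r["tags"] & history))

results = [
    {"url": "neighbor-the-chef.example",   "tags": {"cooking", "local"}},
    {"url": "neighbor-the-runner.example", "tags": {"sports", "marathon"}},
]

home_history   = {"cooking", "recipes"}   # searches made from the home computer
coffee_history = {"sports", "marathon"}   # searches made from the coffee shop

# Same query, same results, different order per profile:
print([r["url"] for r in personalize(results, home_history)])    # chef first
print([r["url"] for r in personalize(results, coffee_history)])  # runner first
```

One query, two users, two different “truths” – which is precisely why Gordhamer wants the filtering made visible.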
This brings Gordhamer to ask for three things: filtering needs to be transparent, we need to be able to make choices about the filtering applied, and there also needs to be an unfiltered option. Gordhamer’s observation is that we will be increasingly inundated, overwhelmed and clogged up with irrelevant and relevant information alike, while still having only 24 hours in a day. The new paradigm is no longer the question of the many different ways of sharing on-line, but the question of RELEVANCY.
And with relevancy as the paradigm shift in the near future of social media, we are back to curation and human algorithms. He or she who makes the most noise will be heard! What else is new?
(Originally posted June 2011)