Thursday, September 28, 2006

2044

I stand before you, gentlemen and ladies, to examine the events that led to our present day. The world has changed more in the last fifty years than it has in all its Promethean origins. There are no wars, famine, or civil strife. Our illusion of perfection, punctuated by spotless, equitable cityscapes and monolithic, inoffensive skylines, seems complete, seamlessly integrated into our new worldview. We must remember that this is just an illusion, as well as the price we pay for such “perfection.”

There was a time, before many of you were born, when new products were conceived by groups of individuals. The economy thrived on these ideas, embracing the archaic ideals of “originality” and “individuality.” In essence, the creator owned his or her creation, distributing it as he or she wished. The ultimate irony of this system lies in the conception of the internet, which led to the death of this school of thought. With the web came commonality. Suddenly, we could network the greatest minds of a time period, bringing them from their disparate locations to work on a single problem. Among the first to embrace this phenomenon were Wikipedia and Linux, forerunners to the Halls of Knowledge and Control. Open source soon became ubiquitous, easily outstripping the private sector with cheaper and better products. We started mimicking this practice in politics, industry, and entertainment. The philosophy of many minds over one became ingrained in our culture.

Finally, we took this idea to its extreme: if several minds are better than one, then all minds are better than several. This was the essence of Open Source, Open Mind, a technology that itself was peer edited. The program took opinions directly from the minds of its users, broadcasting them to a central database that would then compile the best possible amalgam of a project from this data. Political and social strife ended as Open Mind created millions of fair and equitable plans and technologies, ensuring that the greatest benefit was presented to the collective. In this day and age, ownership has become obsolete. All people contribute to the Open Mind, and all its creations belong to the cooperative. We willingly give total control to this system, lauding the benefits while remaining blind to the dire consequences.

Although Open Mind originally created a flurry of innovations, the flood of ideas waned to a trickle as humanity began to express less and less sentiment for change. Soon, any newly proposed project was merely whittled down as people expressed so many conflicting viewpoints that Open Mind had no choice but to create one inoffensive to all. Thus, our present society is not one of enlightenment and virtue, but of stagnation and political correctness. The notion of artistic vision has been murdered, destroyed by the very diversity of human spirit that, ironically, made open source popular. As I walk through the streets of our fair Utopia, I see none of the fierce hatred or sadness that permeated olden times, but neither do I see passionate exuberance or creativity. We, as a race, are content with our present situation, but as individuals, as man, we falter in it. You see, while freedom from strife is a luxury which Open Mind affords us, what it denies us is something far more fundamental and necessary: invention.

Wikipedia: An example of the fallacies of open-sharing

Up until last month, I had shamefully used Wikipedia as my principal source of information for everything from tissue subculture to tennis rankings. I, like many of my peers, had great faith in the open-sharing format and truly believed that editors and reviewers would rapidly weed out any fallacies in the published information. Much to my disappointment, this was not the case.

As I was finishing an internship at Novartis this summer, my final project entailed creating a poster to summarize my work. To find some background information, I performed a Google search and clicked on the first link I saw: a Wikipedia article. Never thinking to double-check the accuracy of the article, I included the information in my poster. The next day, when I presented my summer work, the judges immediately looked at the poster and pointed out that the Wikipedia information I had included was an outdated and false opinion. I was shocked: could my trusted Wikipedia have actually provided me with false information?

As I think about it more, the greatest advantage of Wikipedia, the ability of anyone to add or edit anything, is also its biggest drawback. As far as I know, the article I read about Darwin’s theory of evolution could have been written by a proponent of the Church of Scientology, or Tom Cruise himself. The problem with open-sharing is that it often allows opinion to override fact. Posting information has become just as much about politicizing an opinion as it is about providing an accurate account. And when this politicization gets out of control, people like John Seigenthaler have their names defamed and their jobs put into jeopardy.

As I learned from my Novartis poster, most contributions to Wikipedia are poorly researched, if they are researched at all. Oftentimes, the accuracy of an article will depend solely on the one website from which the contributor obtained the information. As a result, a substantial amount of information on Wikipedia is highly biased, has many gaps, or is just plain wrong. Furthermore, the text of a Wikipedia article is often copied directly from other websites. And due to the open format of Wikipedia, copyright holders can do little about it.

The Wiki generation has created noise and chaos, not knowledge. Previous encyclopedias contained precise information that was well researched by trusted sources. Wikipedia, on the other hand, contains a large number of errors and superfluous facts. While I agree that Wikipedia is a good source for general knowledge, it should never be trusted as a primary source of information and should be recognized by its users as an unreliable and untrustworthy reference.

Wednesday, September 27, 2006

Wikipedia: Not as chaotic as you might think

Wikipedia has four key policies to help ensure the project stays true to its original intentions: 1) Wikipedia is an encyclopedia. 2) Respect other contributors. 3) Don't infringe copyrights. 4) Avoid bias. It is very important to understand these policies before making statements about the purpose of Wikipedia. First and foremost, Wikipedia is an encyclopedia; “its goals go no further.” Wikipedia is meant to be used as a reference, but not as a primary source. As with other encyclopedias, Wikipedia's goal is to give a basic summary of the facts about a subject. Posting original research on Wikipedia is against the rules. Instead, users are encouraged to give a brief summary of the information and then cite the original source. Another very important aspect of Wikipedia is the fourth key policy: avoiding bias. It takes a lot of effort to keep all of the articles in any encyclopedia free of bias, and Wikipedia's sheer number of topics and articles makes it extremely difficult to keep them all unbiased. This is why the objectivity of Wikipedia's articles is a frequent subject of debate.

Wikipedia articles dealing with controversial or current issues present a challenge for users to regulate. Topics of this sort often include contradictory ideas, which obfuscate the truth. Some of the most difficult articles to keep objective are those dealing with political figures or issues. In a realm such as politics, where the facts rarely agree with the "truth," objectivity is nearly impossible to maintain. An example of this is the article on George W. Bush. Even though the article is almost entirely composed of facts, many people would say it is biased because of the importance or emphasis placed on certain facts. Even the Wikipedia entry for Wikipedia has a banner on a section of the article questioning the neutrality of that section. Because of this banner, though, that section of the article will likely be fixed soon.

Wikipedia is not the chaotic, unstructured blob of information some believe it to be. Wikipedia has many tools in place to make sure some amount of order is kept, and this banner is a good example of them. Enough people said in the discussion section of the article that they did not like the section in question, but they couldn't figure out how to clean it up properly, so they decided to warn readers that the section's neutrality may be compromised and to ask for help from anyone who had an idea of how to fix it. Wikipedia has implemented many measures such as these to assist those who want Wikipedia to be an objective source of information and are willing to put in the time to make sure it stays that way.

Why play Second Life?

In chapter three of The Wealth of Networks, Yochai Benkler mentions massive multiplayer online games. This started me thinking about my own gaming experience, and I realized that I don’t understand how one can find continuous enjoyment in playing a game such as Second Life. A game like this, which merely simulates reality, takes users out of the real world. By that I mean they stop living their actual lives.

Gaming in the past took people out of their normal reality and put them into an alternate universe where they could be any sort of being. The kinds of games I play are sports games put out by EA, such as Madden or FIFA Road to the World Cup, and games like Halo or Final Fantasy. With all these games, I become for that time a person or persons with more ability than me. I can be Tiki Barber or Ronaldinho. In Second Life, this alternate reality is basically a simulation of the world today. Second Life does offer the gamer the ability to become almost anyone they want to be, but it is still the gamer living that virtual day-to-day life. They can buy a house, become rich, and buy extravagant virtual objects. But why not do this in the real world and make yourself actually wealthy? Why not live your day-to-day life to the fullest?

I guess a counterargument can be made to what I just said: why don’t I go out and play football or soccer when I want to play video games? My response is that the NFL and international soccer are completely different experiences than that of playing backyard football. As a varsity athlete, both at MIT and back in high school, and a big sports fan, I know that the feeling one gets from a sports video game is similar to actually being there. This sort of game simulates a moment of life, in my case playing high school football underneath the lights on a Friday night, but with Second Life you are simulating a whole other life. One would be living two lives, one of which is just zeros and ones. Most things that can be done in Second Life can also be done in the real world.

So as not to be totally negative toward those who actually play this game, there do seem to be some positives to it. One positive I found was that everyone can create things in the game that they can use for trade or to make money. Making money while being entertained by a game is a definite positive.

I am not completely hateful of games like Second Life; I just understand better why a game like Halo is played. Playing Halo can be analogous to reading a science fiction book about saving the human race from aliens. I feel that Second Life does not offer much outside of what a person can do living a normal, productive life.

Media Split

Open-source, peer-produced software constitutes more and more of the common applications used today. Yochai Benkler describes the phenomenon in The Wealth of Networks, giving examples of how major commercial and government entities now run on GNU/Linux servers and other free software that, because of their peer-produced origins, often perform better than their commercial counterparts. That peer-collaborated projects often produce better results holds especially true in the world of software. Only software that is highly specialized, difficult to replicate, and often exorbitantly priced seems to be immune to peer production: advanced media tools like Macromedia Flash, high-end video-editing software, and engineering programs never see open-source equivalents. As the common media user begins to use free software and the corporate world continues in its use of commercial programs, the spread of open-source projects leads to another line of social schism.

The ever-expanding peer resources and immediate adaptability of open-source software make any program seen as necessary free of charge. That notion began even before open source became commonplace. For example, Netscape was once a commercial, priced product, but Microsoft’s release of Internet Explorer quickly made the public expect all web browsers to be free. Mozilla Firefox’s recent emergence as an open-source web browser cemented that notion, providing many more features than Internet Explorer despite being free. Open-source programs for word processing, spreadsheets, media players, and countless other utilities have popped up, slowly replacing the need for Microsoft’s Office productivity suite. Even Windows as an operating system comes into question with GNU/Linux’s popularity as a more stable, adaptable, and preferred server operating system among many computer-programming adepts. In general, any utility that benefits many media users finds itself open-sourced, making it possible, and increasingly popular, for the average person to live their media lives entirely off free, open-source software.

On the other end of that scale is the corporate world. With intensive use of specialized commercial programs, it’s easy to see why open-source peer collaboration does not work here: what company would open-source the programs that give it an advantage? What programmer would collaborate on a program he or she is not at all likely to use? Yochai Benkler’s example of IBM supporting open-source GNU/Linux is a notable exception, but GNU/Linux is a commonly available operating system whose improvement would benefit IBM. More “elite” programs are never open-sourced: there are no companies willing to give out the code for Adobe Photoshop or for Swift 3D. Even if there were collaborators willing to create open-source equivalents, the programming involved is immensely difficult. The general public will probably never have (legal) access to any of these programs and will have no need for them, either.

In the near future, and to a certain extent today, a person’s computer may be as telling of their social status as their house or their car. An average consumer’s laptop will run on Linux, completely filled with other open-source programs, while the corporate elite’s will run on Windows and host a complete army of “elite” commercial programs. Media will become a social schism, and information will continue becoming currency.

Objectivity of Wikipedia

The Wikipedia is based upon a set of standards known as the “Five Pillars” that underpin its standing as a viable reference. These state that Wikipedia is an encyclopedia, has a neutral point of view, is free content, has a code of conduct, and does not have firm rules. These criteria, if followed, are meant to maintain the objectivity of the site a majority of Internet users have come to trust. If out of nothing but pure curiosity, though, it is interesting to take a look at the Wikipedia page for neutrality. How objective can a purely factual website be about itself? Running through the Five Pillars should serve as an accurate checklist.

Wikipedia is an encyclopedia. An encyclopedia by definition is “a reference work… containing articles on various topics… dealing with the entire range of human knowledge or with some particular specialty.” The Wikipedia defines itself as a “Web-based free-content multilingual encyclopedia project.” The fact is that Wikipedia is a vast collection of web articles that covers nearly all aspects of human knowledge, with small gaps in minute subjects such as the population of a small village in Russia, an event center in Arizona, etc. It qualifies as an encyclopedia.

Wikipedia has a neutral point of view. After reading through the article, one should pay particular attention to the areas in which the criticisms are mentioned. Bringing up the hot topic of John Seigenthaler’s defamation, it seems that the online encyclopedia could have included more regarding the subject. The exclusion of a link to Seigenthaler’s article on the page seems a bit suspicious. One can find out more by looking up a news article regarding the incident, including the text that referred to Seigenthaler’s alleged involvement with the Kennedy assassinations. A lack of objectivity can be found here not in what was on the page but rather in what was excluded.

Wikipedia is free content. It is interesting to note that the Wikipedia is made available under the GNU Free Documentation License, a counterpart to the software license that Richard Stallman helped to design with his Free Software Foundation. However, in the Wikipedia article under its Editing section, it is written that the Wikipedia is developed “much the same way that open-source software [is] develop[ed].” With current discussions over the difference between Free Software and Open Source, it seems odd to include the terms in the same definition. So, what is it? If the Wikipedia has abandoned its Free Software roots, could it be influenced by the premise of the corporate mainstream as other programs have been? Time will tell.

Wikipedia has a code of conduct. The Wikipedia code of conduct is meant to inspire a generally peaceful flow of mutual information between all who wish to access it. However, with the combination of many controversial topics and the anonymity of the web, this code can be easily ignored. Incidents include the constant attacks on the biography of Gordon Brown, the current Chancellor of the Exchequer of the United Kingdom, in which a determined vandal (or group) will occasionally delete the entire article and replace it with the single word, “Tax.” Even though these situations may be humorous, they act as proof against the consistent reliability of the Wikipedia and discredit it even further.

Wikipedia does not have firm rules. The final pillar acts as a means to inspire creativity without consequence. However, when this page is selected, a discouraging message is displayed: “Significant revisions are proposed to this policy or guideline.” How can one go about defining a rule to ignore rules? The paradoxical answer can’t be defined in a manner that will satisfy every person who asks the question, because the answer is relative to the situation in which it is asked. Therefore, with this principle in place, the Wikipedia can never be set into balance on all subjects, because any given person who can edit the wiki (i.e., everyone) carries his or her own opinion on some given subject, making Wikipedia less than 100% objective.

The overall analysis of Wikipedia can be disconcerting, raising questions about a lack of expertise, varied editor motivations, and even a foundation based upon a disregard for its own rules. The objectivity of this online collection can be, if nothing else, a little shady. However, when compared to other reference material, Wikipedia reigns supreme in terms of ease, availability, and overall happiness for its users. So, if presented with a choice of which tool to use when writing a report, the writer must ask whether objectivity is truly the most important quality to consider when choosing a resource.

Open Source vs. Free Software

Economists often joke that they spend their time trying to understand a “dismal science.” And who can argue that it isn’t dismal that firms can’t do things like reduce pollution emissions unless there is sufficient economic incentive? Surely the CEO of an energy company would rather be helping than hurting the environment; it’s their air too, after all. But the CEO knows that everybody else in the industry is playing by the rules of game theory, and that if his or her firm spends the money for cleaner air, it won’t survive in today’s market driven by cost cutting and profit seeking. Capitalism seems to inhibit good will in almost all cases except one: open source and free software.
Richard Stallman was the maverick who, back in the 1980s, went against the grain of economic incentive with his General Public License, or GPL. By ensuring that information and programs would be available to all, without any profit seekers attempting to exclude users through prices or any other means, he laid the foundation for open source and free software. The notion that an almost entirely decentralized process of production fueled by goodwill can be so successful against more traditional production models is surprising, but entirely true. The whole network isn’t small, either; 70% of web server software relies on the free Apache web server. Clearly free software cultivates superior products and services in the case of computer usage, but why is that? And why does it work?
Open source software development is a honed factory for revised and refined programs. This model, based on “small incremental improvements to a project by widely dispersed people,” has the edge over any centralized program production because there are almost no constraints on how many times a product can be revised. Even if the revisions are small or even take a step backwards, on the whole the product turns out better because of the mass participation of programmers who have a genuine interest in improving the world of computing, as discussed on page 2 of Yochai Benkler’s in-depth look at social production. The motivations for contributing to open source software are practical ones, though, not so much principled ones.
Fundamentally, open source software and free software aim to do the same thing. However, if the motivations for contributing fall into the latter category of the aforementioned reasons, then the contributions support the free software movement. This movement supports the betterment and free use of software because it would be socially wrong to withhold it from some or all of the public. There are a number of debates about which movement is better for computer usage, but both movements are aimed in the same direction. In fact, the open source label was developed in response to free software to clear up the ambiguous term “free” and to objectify the goals of more democratic program development. Open source software only later distanced itself from free software because, as Eric S. Raymond put it in a 1998 interview, “in the battle we are fighting now, ideology is just a handicap. We need to be making arguments based on economics and development processes and expected return.” (Salon.com)
While proponents of free software and open source software argue over the benefits of more pragmatic or more social foundations for their movements, both are having positive effects on the media world. Somehow these decentralized and highly democratic production models are thriving in a world of capitalism, and that is the overriding good of both open source and free software.
Outside sources consulted:
http://cs-exhibitions.uni-klu.ac.at/index.php?id=224

Order and Chaos

Note: I wrote this before reading Benkler’s The Wealth of Networks and before Tuesday’s class discussion.

The class agreed that the Internet represents anarchy, a term that conjures images of disorder and chaos. However, I consider the Internet anarchic only in the literal definition: an absence of a ruling authority. Contrary to the preconception that a ruling body is necessary to keep control, the success of user-managed sites shows that self-regulation can produce order, quality, and stability with minimal oversight.

Slashdot, a technology news website (http://slashdot.org), is an example of self-regulation. Although paid editors decide which story submissions appear on the front page, the discussions following each story are moderated exclusively by users. It is these discussions that are the heart of Slashdot. Slashdot subscribers are randomly granted “mod points,” which allow them to give a figurative thumbs up or thumbs down to other users’ comments. A score of -1 through 5 is computed for each comment, and readers can choose to see only comments scored at or above a set threshold. Thus, within minutes, insightful and thought-out comments float to the top, whereas pointless and formulaic ones sink into oblivion. While each user’s judgment of worth reflects his personal biases and preferences, the score of a comment averages out to reflect its worth as deemed by the readership as a whole. No central editorial body can replace this collective process. Not only would its judgments fail to reflect the community opinion, but the sheer volume of comments would preclude reading every comment to determine its worth.
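
To make the mechanism concrete, here is a minimal sketch of threshold-based moderation in Python. The names and starting values are hypothetical (the real Slashdot system layers karma, per-user thresholds, and meta-moderation on top of this), but the core idea is just clamped scores and a visibility filter:

    # Toy model of Slashdot-style comment moderation.
    # Scores are clamped to the -1..5 range described above.
    class Comment:
        def __init__(self, text, score=1):
            self.text = text
            self.score = score  # a typical starting score

        def moderate(self, delta):
            # A moderator spends a mod point: +1 or -1.
            self.score = max(-1, min(5, self.score + delta))

    def visible(comments, threshold):
        # Readers see only comments at or above their threshold.
        return [c for c in comments if c.score >= threshold]

    comments = [Comment("insightful analysis"), Comment("FIRST POST!!1")]
    comments[0].moderate(+1)
    comments[1].moderate(-1)
    comments[1].moderate(-1)
    for c in visible(comments, threshold=1):
        print(c.text)  # only the insightful comment is shown

With thousands of readers each spending a few points, the per-comment average converges on the readership's collective judgment, which is exactly the averaging effect described above.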

Wikipedia, a user-edited encyclopedia (http://en.wikipedia.org), is another bold experiment in digital anarchy. Although the project has bureaucratic bodies such as the administrators and an arbitration committee, most of the project’s stability comes from self-regulation. If a vandal replaces the photo of a political figure with an image of a penis, another user will revert the change. If one user adds information that is factual but uses biased language, another will rephrase the bias away while retaining the information. Although some central authority is needed to make decisions in extreme or controversial cases, accepted editing norms and peer feedback encourage helpful contributions and subtly guide the project toward improvement.

Wikipedia functions by evolution rather than by intelligent design. Instead of central planning, article edits are small mutations, and edits that improve the article remain. While an article may fluctuate unpredictably with additions, changes, and deletions, these edits average out to a gradual improvement of the article and the entire project. We saw this in the time lapse video of the London bombings article, a “breaking news” article on a controversial event, but the same process is at work in the improvement of more typical articles over the long term. Out of chaos emerges order.
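
This evolutionary dynamic can be caricatured in a few lines of code. The model below is deliberately crude (article quality is a single invented number, and the revert probability is made up for illustration), but it shows how random edits plus selective reverting produce long-term improvement:

    import random

    # Toy model of "evolution, not intelligent design": each edit is a
    # random mutation to an article's quality. Harmful edits are usually,
    # though not always, reverted by other users.
    def evolve_article(steps=1000, revert_rate=0.9):
        quality = 0.0
        for _ in range(steps):
            edit = random.gauss(0, 1)  # a small random change, good or bad
            quality += edit
            if edit < 0 and random.random() < revert_rate:
                quality -= edit  # another user reverts the bad edit
        return quality

    print(evolve_article())  # fluctuates step to step, drifts upward overall

No single edit is guaranteed to help, yet the article improves on average, which is the sense in which order emerges from chaos.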

The regularity of statistics that stabilizes Wikipedia manifests itself in many physical processes. A gas consists of countless erratically bouncing molecules, yet its behavior can be described with remarkable accuracy by macroscopic physical laws (PV=nRT). Or, to steal another metaphor from the book Gödel, Escher, Bach, an ant colony, a structure that is stable and evolutionarily fit, consists solely of seemingly mindless ants scurrying around at random.
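
The law of large numbers behind this regularity takes only a few lines to demonstrate (a toy simulation, not a physics model): individual samples are erratic, but their average stabilizes as the population grows.

    import random

    # Many erratic "molecules," one stable average: the mean of the
    # sampled speeds converges toward the true mean (1.0) as n grows.
    for n in (10, 1000, 100000):
        speeds = [random.expovariate(1.0) for _ in range(n)]
        print(n, sum(speeds) / n)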

These ants are the “faint traces of the voices of various anonymous authors and editors” in Jaron Lanier’s vision of Wikipedia, and the colony is the body of information formed. As an encyclopedia, Wikipedia is expected to be neutral, informative, and relatively dry reading, making it a project perfectly suited to design by community. Although an article written by a collective lacks a coherent voice, it gives a fairer and more extensive treatment of a subject than any individual could possibly provide. On the other hand, a Wiki project to collectively create a painting or write a play would surely fail, because such artistic endeavors require the voice and creative genius of a single mind.

The law of averages also fails when the individual biases of users average out to an overall net bias. On Slashdot, moderation often reflects an anti-Microsoft mindset. But this is the predisposition of the readership: if readers enjoy an otherwise-unremarkable comment that bashes Microsoft, then the moderation accurately reflects the comment’s received value.

On Wikipedia, however, systemic bias is more of a problem, since it is unacceptable for an encyclopedia to be skewed towards covering topics that interest the demographic of Wikipedia editors: young, liberal, computer-savvy, English-speaking white males. As the reach of the Internet expands and the information gap shrinks, such inequities will gradually disappear.

The Confines of Peer Production

Yochai Benkler bases his book The Wealth of Networks around a central claim that the social peer production network is beginning to excel beyond traditional industrial organizations. In many respects, this is undeniably true. Wikipedia, a thriving peer-produced online encyclopedia, has become a viable source of knowledge openly supplied by the collective minds of the common. The effectiveness of peer production is further supported by the example of open source software, in which a large collective body collaborates, presents ideas, shares knowledge, and ultimately assists in deriving successful products. However, these examples, though valid, are all centered on problems that contain only concrete variables, with a concrete solution.

In the case of Wikipedia, it is explicitly stated in the policies of the website that information entered by an individual must be factual and unbiased. Therefore, Wikipedia is a collection of concrete knowledge. In analyzing open source software, this continuing theme of concreteness is present. When someone decides to edit some freely distributed open source code to create a new product, they first decide what they want their spin-off of the software to be able to do, and in what ways they wish for it to differ from the previous version of the software they are modifying. Though this is clearly subjective, this portion of the production is ultimately the decision of an individual or small collaborative group. Once they have their vision in mind, they look to community input for help in turning their ideas into actual code that can be executed by the computer. This is the point in the production at which the “peers” have direct influence, and the knowledge they provide is strictly factual and concrete. There is no bias when it comes to coding a particular sequence of actions to get a desired result; there is simply an absolute solution. Therefore, the “peer” influence on the production of open source software is clearly composed of the sharing of factual knowledge that results in a concrete solution, and thus open source software is highly successful.

It is when you throw in variables that lend themselves to opinion, bias, and the lack of an absolute solution that peer production loses stability. For instance, if you gathered a million people together and asked them all to contribute to the design of a single elaborate and functional building complex, chaos would erupt as ideas awkwardly meshed due to differing emphases and opinions on style, functionality, and so forth. However, if you gave this same group of a million people the detailed blueprints to a building and told them to construct it, they would be able to swiftly produce the desired results. This is simply because, in this second example, the only remaining variable left to “peer production” is the physical assembly of the building. This has a single, concretely defined solution, which can be reached through the collective factual knowledge possessed by the mass.

Although Benkler’s book seems to bind the effectiveness of peer production to the confines of the virtual world, it is not this that limits its ability to be successful. The key factor in determining whether or not peer production will excel as a viable solution to a problem can be found by examining the nature of the solution: concrete vs. interpretational.

Microsoft is greater than Open Source

True peer production, in the sense of a community of people creating a final product the same way Wikipedia is brought together, will never happen. As open source becomes more and more prominent, we will see it begin to fall apart due to its lack of a unifying goal.

Nothing binds the people who create open source software to a specific model. While Benkler constantly presents it as almost an advantage that people are willing to share and contribute for non-capitalistic reasons, this model is bound to fall apart due to scattered goals. Take the current builds of Linux as an example. Competition among the already small Linux user base over different forms of Linux prevents a unified model of the software (such as Windows has). As a result, no universal and comprehensible “installer” has ever been created for the Linux OS, creating a huge wall for any end user who wants an easy switch to a new operating system. Different distributions have created different solutions for this problem, resulting in a mishmash of different file formats and general issues. Compare this with the central design of Microsoft’s installation system, and the necessity of a central vision soon becomes apparent.

Benkler cites NASA clickworkers as an example that an open network can work wonders for cheap, efficient research. However, the research required people to click on craters in pictures of Mars. There was no unified objective needed; the end user only had to perform a simple manual task to accomplish the goal. The Mars experiment is basically the perfect example of an open network: you really can't screw anything up, and anyone can perform the task. Actual open source software isn't the same.

Wikipedia, although more involved than the clickworker project, isn't the same as open source either. The end user just has to contribute information to an already existing database (with a set mission). Wikipedia is successful because its users do have a unifying goal; they are more like colleagues at an actual encyclopedia, confirming and updating other people's entries, than random contributors to a project.

Open-source software is amazing. The ability of people to contribute and create a usable, functional, and free piece of software is astonishing. I actually typed this entire essay using open source software. It runs well and it's stable. However, it's funny to notice how much open source software rips off companies' designs. OpenOffice is basically designed to be a free and functional version of Microsoft's Office suite. The toolbars in OpenOffice basically mimic Microsoft's design, and all the functional features, including the interface and spell-checking feedback, are Microsoft's creation. And the GUIs of so many distributions mimic the perfected start menu, taskbar, and window design of Microsoft. The Athena computers around campus all have an eerie Windows feeling to them.

Because Windows is designed by a company with an aim to sell products and a typical top-down organization, it is able to maintain a unified goal and create accordingly. For the innovations of the future, it's still going to be companies with incentive to innovate that move technology forward.

Free Software is a Joke

“Free software projects do not rely on markets or on managerial hierarchies to organize production.”

“Free software offers a glimpse at a more basic and radical challenge…What we are seeing now is the emergence of more effective collective action practices that are decentralized but do not rely on either the price system or a managerial structure for coordination.”

Free software is a joke. It was used to make a statement, and regardless of whether or not it worked (it did, of course), it isn’t grounded in the true nature of what it seems to imply. According to Yochai Benkler, the foundation of free software is described as follows: “Participants usually retain copyrights in their contribution, but license them to anyone—participant or stranger—on a model that combines a universal license to use the materials with licensing constraints that make it difficult, if not impossible, for any single contributor or third party to appropriate the project.” Is this the goal of a true ‘peer-to-peer’ network of minds working to create a piece of software everyone can use? It never was, and it most likely never will be. Free software grew from “political conviction.” Don’t be confused: the statement free software made was great. Richard Stallman was visionary in using the system to create “liquidware.” Software was previously developed by large corporations that restricted access to a program in order to charge money for its use. The reason Stallman’s work should be deemed “liquidware” is that he generated software that could flow and change as much as people were willing to modify it, yet it still had to be held in a container, a framework if you will. Yes, the programs are monetarily free. Yes, the software can be changed by anyone who is willing. But what is the real purpose of free software? Is it to make the best program ever imagined? Hardly. In essence, the free software movement was a stab at software development companies. It was Richard Stallman saying to the Microsofts and IBMs of the world, “We don’t need you.” Profound and driven, this movement was no more than a petty grudge held by a very smart man.

However, the free software movement did pave the way for the real peer production mechanism, open-source. Benkler talks of three men. The first creates a functional but simple program. The second asks for a new feature and/or reports a bug. Finally, the third modifies the program according to the second’s requests, and so on and so forth. Benkler says, “This collaboration is not managed by anyone who organizes the three, but is instead the outcome of them all reading the same Internet-based forum and using the same software, which is released under an open, rather than proprietary, license.” Finally, the souls of corrupted and unused computer programs can rejoice over the fact that there will, from now on, always be the hope of being improved. Open-source may be seen as a revolution of free software, but in actuality it is more of a Renaissance. There was no more fight. No longer was there a point to be proven. From now on, free software, open-source software, will be about the software itself and not about what or whom it represents.

I Feel Important

Recently I had the great fun of making a short video with a friend. As soon as we finished the fifty-nine-second piece, we immediately posted it to YouTube.com. It was so exciting to make the video because we were capturing a moment of ourselves on a permanent record. The moments we created on video would not be lost forever like most moments in our lives. However, exciting as it was to make the video, it was all the more exciting to post it to YouTube.com, a video-sharing website. As the video was uploading to the YouTube website, I felt the anticipation welling up in my heart. It is undeniable that for a media uploader, there is something magical about posting media. There is something extremely exciting about interjecting your work, your art, your life, your ideas, and your very self into the World Wide Web. I indeed understand this excitement. I am a media uploader.

There are two wonderful sentiments that come from video uploading. One is the sentiment that the moments you captured on video are now available to watch and rewatch anywhere and anytime in the world. The other sentiment is that once you have uploaded your video, you have made a mark on the world and you matter.

Just as “snail mail” was sucked into the World Wide Web in the form of email, video has been sucked in as well in the form of media posts. Email is great because it is quick, omnipresent, and easy to do (you can email at any time, from any place, with just a mouse click!). Video uploads are the same way, and this is why they are also wonderful. Media uploaders are essentially liberated from the physical limitations of DVDs, and people can see others’ works without having to obtain and insert a circular, shiny disk into their computers. The only way to enjoy video is to have it be seen. With video uploading, these records are given an honest chance to be seen and reseen all the time. Video posts will always have a chance to be seen by someone; they will always have a chance to become meaningful to someone as long as the posts are not taken down.

When the upload of a video completes, another magical thing happens: the media uploader puts a mark on the world. He creates a unique URL address that is distinctly his own. For the video I made with my friend, the URL address http://youtube.com/watch?v=UhFF82Rx4lE is the link to our video. This address is significant because it can be easily copied and pasted into different people’s web browsers and viewed over and over. Whoever receives this link will be exposed to our video, and for the fifty-nine seconds that they have the link open, they are focusing on what we wanted them to see. I feel somewhat powerful. Besides having an address in the World Wide Web, what makes our video even more accessible to the world is the fact that YouTube.com allows our work to be tagged, searched, and categorized. Anybody who clicks on the tags associated with our video will have a chance to see it as well. If someone clicks on the tags “jonjon,” “acf,” “lab,” or “mit,” they will find a list of videos to choose from, and my video will be on that list. Also, if someone searches for videos using the keywords “lab” or “jonjon,” our video will be a search result. Finally, if a browser looks for videos under the category “Science and Technology,” he will also stumble across our video. This makes me feel really important! If someone clicks around on YouTube.com in just the right way, I might be able to engage him for fifty-nine seconds. Not only that, I will also know when someone has viewed our video, because YouTube keeps count of the number of times videos are viewed.
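
Under the hood, this kind of tag lookup is just an index from tags to videos. The sketch below is a toy version in Python (the tags and video ID are the real ones from our video, but YouTube's actual search is of course far more elaborate):

    # Toy tag index: how a click on a tag can surface a video.
    videos = {
        "UhFF82Rx4lE": {
            "tags": {"jonjon", "acf", "lab", "mit"},
            "category": "Science and Technology",
        },
    }

    def find_by_tag(tag):
        return [vid for vid, meta in videos.items() if tag in meta["tags"]]

    print(find_by_tag("lab"))  # ['UhFF82Rx4lE'] -- our video turns up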

It is really exciting to know that there is a possibility that my friend and I might engage and influence someone during the fifty-nine seconds our video is playing. This is fifty-nine seconds of power we would never have had the chance to own before the media-uploading age. We might make someone’s day happy; we might change around someone’s boring day at lab; we might inspire another video to be made; we might change someone’s opinions; we might get someone to think science is cool. Although these are just “we might” statements, I am optimistic enough to think that our video is actually powerful. I actually do feel important.

Free Software vs. Open Source

When this paper was assigned, I was reluctant to write a 500-word paper on a subject that at first seemed very ambiguous to me. Both terms defined the same concept, and there seemed to be no nuance whatsoever between the words. They were merely two redundant jargon words referencing the same idea, one that seemed obvious to me and in need of no further explanation. When I typed the words into the Google search bar, the number of hits was surprising. The difference between Free Software and Open Source was at the heart of many forum discussions and was the subject of numerous papers. This enthusiasm gave me the motivation to deepen my research on the subject. I was eager to see what could be so important about the distinction between Free Software and Open Source that it animated so many people.

The first interesting finding is that there exists an official definition for each of the two concepts. Free Software denotes the power and freedom of the user to run, copy, study, change, and improve a computer program. This freedom refers more precisely to: “the freedom to run the program, for any purpose; the freedom to study it, and adapt it to your needs; the freedom to redistribute copies; the freedom to improve the program, and release your improvements to the public.” Open Source, meanwhile, describes practices in production and development that promote access to a product’s sources. To mention a few, these include free redistribution and the integrity of the author’s source code.

If the two concepts are rigorously defined and distinct, where is the relevance of this paper? The answer to that question is simple. Free Software and Open Source have evolved into real philosophies, and the confrontation between the two movements has become fierce. The Free Software movement and the Open Source movement are like two political camps within the free software community. As in the 1960s, when schisms created radical groups whose factionalism led a number of organizations and conventions to fail over disagreements on strategy, they behave like two enemies, even though they share common thoughts on practical recommendations and, more importantly, share the same enemy: proprietary software. Each camp desires to distinguish itself from the other. Each camp brags to the world about its unique contributions to the community. Each camp goes about fiercely demonstrating the problems that the other one does not solve or even creates. A partisan of the Free Software movement even writes: “We want people to associate our achievements with our values and our philosophy, not with theirs. We want to be heard, not obscured behind a group with different views. To prevent people from thinking we are part of them, we take pains to avoid using the word ‘open’ to describe free software.”

These words carry a lot of animosity, and to be honest, I can see no reason strong enough to justify this radicalism. The two movements should work together in achieving the same goal. Far from being incompatible, they are complementary. They are two camps of software communism; one is just further to the left than the other.

Technology and Sharing

In The Wealth of Networks, Yochai Benkler discusses the value of sharing goods and abilities in an economic sense. From a purely economic view, giving goods can result in a net loss of utility. It is generally understood, however, that giving goods is beneficial because most people derive pleasure from sharing their gifts with others. Benkler takes these concepts further and argues that as technology progresses, sharing will become more prevalent. I think that this is a very sound argument. There are many examples from current technology that back up this idea. For example, music is shared rather freely, and it is remarkably easy to get music from other computers over the internet. This same file transfer allows many people to download videos and television shows. These examples show how sharing has evolved as more people become more technologically advanced.

From a strictly economic perspective, giving gifts can often be interpreted as a deadweight loss. The giver spends a certain amount of money on a product that they then give to a recipient. The recipient, however, might not get as much value from the gift as the giver spent on it. If a giver spends, say, $50 on a gift the recipient values at only $35, the $15 difference is a deadweight loss. With technology, this becomes less of a problem. If a person posts a song or a video clip on the internet, the only marginal cost to that person is the time it takes to post the file and the small chance of being criminally prosecuted for copyright infringement. This is a very small marginal cost, and many people derive benefit from knowing that they are helping other people get the same enjoyment from these files that they got. This small marginal benefit can actually outweigh the marginal cost of posting certain files on the internet. People who decide that the marginal cost of time and the risk of viruses is smaller than the marginal benefit of getting a file over the internet will then download that file. This connection between giver and receiver is only possible with the internet as a catalyst. The technology that allows people to easily post and download material is, in fact, responsible for the culture of sharing that is beginning to pervade the lives of many Americans.

Several years ago, there were many fewer people capable of sharing files over the internet. Not as many people knew how to post files or how to download files safely. As different services (e.g., LimeWire) create better user interfaces, more people will continue to use them to exchange songs and videos. As Benkler claims, this improved technology directly results in more people sharing.

Another advantage of the internet is the capability to share information or advice. If people have problems with their computers, it is fairly easy to search online for advice or for information from other people who have had similar problems. This ability to exchange information is one of the great advantages of the internet. In my own experience, I have often searched on Google for information relating to computer error messages that have appeared on my laptop. It is incredibly convenient to take advantage of the information that other people can share.

New technology has certainly made a large impact on sharing information and electronic data. It is easier than ever to find other people who can give you something that you need or that can use something that you have to share. The internet can connect people more easily than ever, and sharing will continue to become more prevalent as technology develops.

What Makes Peer Production Tick?

Yochai Benkler argues that the peer production model outshines the aging industrial model of the past. He explains that companies using peer production are wildly successful, sometimes more so than even the most stable and powerful industrial organizations, as in the example of the Apache server, a free, peer-produced web server used by some of the largest companies around. However, there is a difference between peer production of software and erecting a skyscraper. Most would be quick to point out that a peer-produced building would fail. But what makes peer production in the physical world less effective than in the virtual world?
Some may argue that resources could be the difference between peer production in the physical and virtual worlds. The industrial model relies heavily on the markets for funding and supply: people work for the company to get paid, and the company charges people for its goods and services (which pays for the supply and the employees). In the case of the peer production model, the only thing a person needs to help create a better piece of software, or to help map the craters on the surface of Mars in Benkler's case, is time; there need not be a tie to markets. With real-world projects, there are physical resources that need to be collected and organized. The important question is: does this really make the difference? I would argue not. In order to host the vast amounts of information contained on some of the open source software development sites, there has to be physical backing behind them. Thus, the sites turn to advertising, sponsorship, and voluntary donations to keep up the servers and other equipment. Sure, maintaining servers is a lot cheaper than constructing a building, but the point remains: resources make less of a difference than one would expect.
The most defining part of 'peer production' is that the product is made by peers. It follows that the effectiveness of a peer-produced project correlates directly with the number of people participating in it. The only restriction that peer production has is the number of people who serve as its collaborative base. Without this 'critical mass,' as Benkler described it, there can be no effective change in the system. I argue that resources are not an issue; if enough people want something, they are likely to help contribute in any way they can; the matter is getting enough of those people. With such a large group, it also becomes easier to make decisions, as people realize that they will need to have others represent them. Slashdot, for example, filters irrelevant and incorrect information out of the system by having moderators who get their power through having demonstrated to the group their trustworthiness and dedication to the system. This model mirrors the representative government that the United States strives to embody. Slashdot is often more effective because of the volume of people that have access to, and control over, the site. Peer production is a more effective system than the top-down industrial system, but only if there are enough people.

Quentin Smith - Freedom to Open-Source

Most users of Linux are just glad to have an operating system that they downloaded for free. They probably don’t even know that it’s not just Linux that they’re running but GNU/Linux, the combination of the Linux kernel and components of the GNU system. For Richard Stallman, the founder of the Free Software Foundation, creator of the GNU General Public License, and inventor of the concept of “free software,” the distinction is an important one. Free software, he often says, is not about free beer but about free speech. A similar concept called open-source is a less politically charged variant that leads to the same end result.

Richard Stallman was motivated to leave his position at MIT’s AI Lab and found the Free Software Foundation when one day, as he was working in the lab, he found a bug in the software for the lab’s printer. The printer manufacturer, however, refused to release the source code for the software, so Stallman was unable to fix the problem himself. Stallman invented free software and the “copyleft” paradigm to ensure that his and other like-minded developers’ software would be free to everyone.

Free software is derived from four freedoms that all users should have with software. Freedom zero is the freedom to run the program for any purpose. Though this seems basic, there are some developers who have restricted some people and groups from using their software. Freedom one is the freedom to study and modify the program. It is this freedom that the printer manufacturer withheld from Richard Stallman. Freedom two is the freedom to copy the program so you can help your neighbor. If a problem has already been solved, there’s no reason you can’t help apply that solution to another person’s similar problem. Finally, freedom three is the freedom to improve the program, and release your improvements to the public, so that the whole community benefits. These freedoms establish the reasoning behind free software licenses, and especially Stallman’s own license, the GNU General Public License.

The label “open-source software” came out of a meeting in Palo Alto discussing Netscape’s decision to release the next generation of Netscape Navigator as free software. Free software proponents, including Eric S. Raymond, author of The Cathedral and the Bazaar, realized that free software and the Free Software Foundation had become too political. Free software was concerned with ideological issues like freedom and liberty. Open-source was a term that didn’t clash so severely with the ideas of proprietary software. Open-source was a nicer way of marketing the same concept: that users should have access to the source code of the software they use and should have the ability to modify and redistribute that software.

It’s important to note that neither free software nor open-source implies that the software itself has zero cost. Publishers are free to charge for copies of their software; they are, however, obliged to make the source code available, and they cannot prevent the user from redistributing the software for any amount of money. Open-source software can be sold commercially, and in fact, some companies do just that. Red Hat sells copies of Red Hat Enterprise Linux and makes the source code freely available to its users.

The free software movement has come from humble beginnings to become a clear and well-functioning alternative to the world of proprietary software. Commons-based development results in products that can match and even surpass the quality of commercial software.

Tuesday, September 26, 2006

Creativity Within Structure

Traditionally, poetry is structured. Shakespearean sonnets, for example, all follow the same rules: they are composed in iambic pentameter and consist of three quatrains and a couplet. It is within these clearly stated parameters that creativity thrives. Given total freedom, poems often disintegrate. Free verse poetry is too often disjointed; the extra degree of choice given to the poet in line and stanza placement makes much of the work jumpy and unclear. The same could be argued for the online, publicly edited encyclopedia Wikipedia. While Wikipedia accepts input from anyone with access to the internet, the format in which contributions must be submitted is far more stringent, and it is for this reason that the website thrives.
Wikipedia is designed to be welcoming to all. Users do not need to know HTML or any other type of code to edit and manipulate the contents of a page, nor do they need to be fluent in one specific language. Though it originated in English, the site now has over one million articles in other languages, removing even typical communication barriers from the equation. Wikipedia emphasizes the importance of everyone’s opinions by allowing even those without an account to submit changes; in these instances, edits are identified simply by the IP address of the computer from which they were made.
While Wikipedia is an equal-opportunity forum, the rules within the website are unyielding. There are three main rules of engagement: all articles must be written from a neutral point of view, meaning that all sides of an argument are presented without bias; all facts presented must be verifiable through other reliable sources; and, similarly, no one may present original research on any Wikipedia page.
As is the case with poetry, it is with constraints that much of the best work is produced. Free verse encyclopedias would be difficult to read and understand. Without rules regarding sources of information, facts gleaned from Wikipedia would have no value because there would be little assurance that they were correct. Wikipedia is not a collection of opinions, but rather one of substantiated ideas. Non sequiturs and vandalism are quickly corrected by the masses. The rules that everyone plays by foster the overall high quality of the production.
The strictly enforced rules present on Wikipedia add a level of professionalism to a forum that could easily have become an outlet for hearsay. Fortunately, these procedures do not limit the wide variety of creative content present in Wikipedia’s online volumes. Wikipedia is a database for the new millennium: its pages contain not only the dry material traditionally featured in encyclopedias but also the stories behind terms defined nowhere else. I found one such term during a recent search of my last name on Wikipedia. One article came up on the subject of ‘Double Fanucci.’ Reading the article, I learned not only about the early computer game Zork, in which Double Fanucci figures as an infinitely complicated card game, but also some of my own history. The dates of the game’s development at MIT were concurrent with those of my father’s graduate education at the institute. I asked the game’s apparent namesake if he could regale me with any stories about its creation. And he did. Thank you, anonymous Wikipedia users, for teaching me a little something about my roots.

The Semantics of Being “Open” or “Free”

Since the conclusion of the Browser Wars between Netscape and Microsoft, there has been a schism in the community of developers sometimes referred to as the “open source” community, and sometimes as the “free software” community. The split is over exactly the ambiguity just raised in referencing this community: should the community of developers adopt the name and, presumably, philosophy of “open source” or “free software”? Since, by its nature, the community is highly decentralized, this issue will remain unresolved for the indefinite future.

On one side, the Free Software Foundation declares that it is morally imperative that each user of a piece of software have four rights outlined on one of their websites, gnu.org:

  • The freedom to run the program, for any purpose (freedom 0).

  • The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.

  • The freedom to redistribute copies so you can help your neighbor (freedom 2).

  • The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

The stance of the FSF is strong and flies in the face of recent interpretations of intellectual property law, which have increasingly favored absolute control for creators over their property. The GNU General Public License (GPL) twists this strong control back on itself by obligating those who distribute modified versions of the software to make their changes available as well. To be clear, the FSF does not believe that any vendor who distributes software should be able to lock users in and continue to charge for the service. The point of software being free is to put users in control of the direction their software takes.
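
Concretely, the “twist” is carried by the code itself: the FSF recommends that each source file begin with a notice granting these freedoms and invoking the GPL. A file header following that recommendation looks roughly like the following (the program name and author here are hypothetical):

    # frobnicate.py -- a hypothetical program, shown only to illustrate the notice.
    # Copyright (C) 2006  A. Developer
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.

Anyone who redistributes a modified version of such a file must keep the license intact, which is how the obligation propagates from one developer to the next.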

The Open Source Initiative, on the other hand, is about using the networked, peer-production model that free software created, simply because its members believe that the open source development model produces superior products. As stated on its website, opensource.org:

We realized it was time to dump the confrontational attitude that has been associated with "free software" in the past and sell the idea strictly on the same pragmatic, business-case grounds .... We brainstormed about tactics and a new label. "Open source," contributed by Chris Peterson, was the best thing we came up with.

The open source movement attempts to jettison the moral and political baggage that the free software movement considers essential, not superfluous. Its advocates do not seek to remedy a moral crisis in proprietary systems. They may believe that free or open source software is more democratic by nature, but they do not find it necessary or effective to make those statements. As such, the FSF and Richard Stallman, the founder of the FSF and father of the entire model, have chosen to discourage the use of the term open source and to promote a higher moral ground.

To contrast the two ideas in a nutshell, open source is about network information economies, peer-production, and efficiently distributing labor, while free software is about making computing more democratic, and freeing software from corporations whose only concern is profit.

Revenge of the Software Hippies

When Linus Torvalds began to develop his world-famous open-source operating system, Linux, he probably never imagined that businesses worldwide and organizations including NASA would rely on his project to keep their servers running smoothly 24/7. Bill Gates probably never expected that, in the eyes of millions, his world-famous Windows would be only second best next to an operating system that didn’t cost a dime. This is one of the many triumphs of peer-produced content in the world today, and while this form of production has its limitations in creative power, its ability to divide tasks into vanishingly small workloads for millions of volunteer programmers around the globe has allowed it to revolutionize media in the digital age.

Many have wondered why peer production really appears only in digital media and not in the physical world; the answer lies in the iterative nature of peer production and its limitations in efficiently using “materials”. In his book, Benkler describes the basic three-person peer-production team consisting of “the first author, who wrote the initial software; the second person, who identified a problem or shortcoming; and the third person, who fixed it”. This is the iterative process by which peer content is created. Unlike in the construction of a building, potentially infinite “drafts” or updated versions of a piece of open-source software are created, so many more bits of data are used in its creation than are present in the final product. It is this process that restricts peer production to the digital realm, where the resources of man-hours and free space for data are effectively unlimited. Such a structure could never thrive in the physical world, where projects have overhead costs too high to warrant multiple revisions of production. Hence, the physical market is limited by the creativity and resources of the individual companies that create products.
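
A toy calculation makes the argument concrete. The numbers below are invented purely for illustration, but they show why a near-zero marginal cost per draft is what frees peer production to iterate endlessly:

    # Invented numbers: total cost of N successive drafts when a revision
    # is nearly free (copying bits) versus expensive (physical prototyping).

    def total_cost(drafts, cost_per_revision):
        return drafts * cost_per_revision

    drafts = 500                            # peer projects may iterate hundreds of times
    digital = total_cost(drafts, 0.01)      # a new digital draft: fractions of a cent
    physical = total_cost(drafts, 50000)    # a new physical prototype: re-tooling, materials

    print("digital:  $%.2f" % digital)      # digital:  $5.00
    print("physical: $%.2f" % physical)     # physical: $25000000.00

Five hundred drafts of a program cost pocket change; five hundred drafts of a building would bankrupt anyone, which is why the physical market stops after one or two revisions.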

Peer production also finds limitations in its ability to create innovative content. Suppose for a moment that a person asks a community of peer software producers on the internet to create a game for him. Hundreds of different perspectives will surely clash at this point, and no single idea can be formulated. However, once leadership is established and a single version of the software is created, the peer production group immediately becomes hyper-efficient, locating and resolving issues almost instantaneously. Perhaps the inability of this system to work creatively is a weakness, or perhaps the system was merely intended to refine and not to create at all. All that is required for this team of workers to function is a creative spark, a seed that receives its water and nutrients from farmers who come from every corner of the globe, seeking merely to help cultivate the seed into a garden.

Peer production has, in its glorious uprising, unearthed a side of mankind that lay dormant: the “good neighbor” in every volunteer programmer has revealed itself on software development sites like SourceForge, where everything from the simplest of questions to the hardest of problems is handled by total strangers at no cost, simply out of good will and the desire to see more, better-functioning software in the world. The combined efforts of these volunteers have resulted in exactly that: software that rivals that of giant corporations and, in many cases, is deemed more reliable.

Digital Speculation

Unknown to the general populace, a new aristocracy is in its fledgling stages. A small group of entrepreneurs has plans to create an environment in which thoughts, ideas, and creativity are no longer free. Paradoxically, the cornerstone of their plans rests on the very institutions created to protect intellectual rights: patent and copyright law.

Who are these menacing entrepreneurs, and what exactly do they plan to do? Surprisingly, they are a group of highly regarded physicists, engineers, computer scientists, biologists, and chemical engineers; in fact, MIT’s very own Robert Langer is part of the group. These academics work for Intellectual Ventures, a company that was founded six years ago and has managed to keep a low profile since. The company essentially pays scientists to daydream and come up with ideas for inventions. It does not implement these ideas, but rather maintains a quickly growing patent portfolio, and then sues people who try to implement the patented ideas.

During the industrial revolution, a few people controlled the means of production: the capital, resources, and infrastructure necessary to produce manufactured goods. Companies such as Intellectual Ventures patent software designs, computer algorithms, and ideas for media use; however, they do not develop those ideas. Such digital speculation might create a new type of aristocracy. Once again, few people will have access to the means of production, which now take the more abstract form of patents.

Legislation such as the Digital Millennium Copyright Act prevents information, software, and media from assuming their natural position in the world as free entities. The natural trend for software and media is to become free; this is evinced by the proliferation of open source communities and the increased popularity of peer-to-peer networks such as LimeWire.
I believe that suitable changes in copyright law must occur in the future in order to prevent the stagnation of technological production caused by a small patent-holding aristocracy.

Hopefully all software, media, and sources of academic information will eventually fall under either a Creative Commons license or the GNU Free Documentation License. In this way, the community is largely free to edit and improve the original creations. Creative Commons licenses allow licensed work to be distributed easily, yet the creator still maintains many rights; for example, a work may be redistributed in a closed format (not open source), and some Creative Commons licenses prevent other people from using a work for commercial purposes. The GNU Free Documentation License is more liberal in the sense that it gives readers the right to copy and modify the work, even for commercial use.

The copyright system is an old system in need of radical reform, because it was not created under the conditions that exist today. Digital media, especially software and the corresponding computer code, should not be treated as proprietary. It is ridiculous when a company that does not develop technology, but is only interested in buying patents and thereafter suing, prevents the community from working on certain projects. In such scenarios, patents and copyrights are detrimental to society because they inhibit the creation of new software and technologies.

Monday, September 25, 2006

Response to Online Collectivism

Twelve years ago, I was doing my third grade class project on Benjamin Franklin and the Declaration of Independence. Before I could start writing facts and sketching pictures on the poster board, I had to go to the public library and search through boxes of index cards, locate the relevant books, and check them out. The whole process took about five hours.

If I were given a similar task today, I would most likely turn on my laptop, start up my internet browser, and type “ben franklin declaration of independence” into the Google search query. I could then scroll down the list of results until I found a page from Wikipedia.org. Within seconds I would have the necessary information, dates, and statistics for my project, including graphics that I could just drag into my PowerPoint slideshow.

I agree with Lanier that the popularization of online collectivism, peer editing, and the ability to instantly access information has its hazards. However, I think the digitization of information is a natural evolutionary process, and its dangers can be reduced with careful management and usage of the resource, which is really also true of pre-internet resources.

The reason information printed in a hardcover book can be more legitimate than information found on a webpage is that it takes a much more thorough process to get something published on paper than to release it on the web. The former usually requires many steps and levels of editing and approval before official release, whereas the latter may require only a computer connected to the internet.

To avoid false information, users should learn to identify credible sources before trusting the information provided. For instance, after I type in my search query, I usually scroll past results from personal sites that may be biased or inaccurate; I have learned to stick with the experts of the trade. If I am looking for information on a movie, I look for results from IMDB.com; I turn to CNN.com for credible reporting on recent events, ESPN.com for updated sports coverage, etc. In my recent experience, I have also begun to trust Wikipedia (an online encyclopedia edited by users around the world) the same way I would trust information on the pages of Encyclopedia Britannica: I have learned to check the references on each page and verify the facts.
There are usually tradeoffs between desirable features. In the case of online collectivism, I believe that the benefits digital media has brought dwarf the small hazards that may arise from bad information. As an internet community, we need to take responsibility for the information we share on the web and be dedicated to verifying its sources.

Friday, September 22, 2006

A User-Refined Experience

I have always wanted to know the average speed of a cheetah. To find a legitimate answer, I have three options. First, I could walk all the way to a library and open a written encyclopedia. I could also pay money for a subscription to an online database such as Encyclopedia Britannica. The most convenient method of all, however, is a simple visit to wikipedia.org, where the information I need is available instantly and for free. It is for this purpose that Wikipedia was designed, as a source of widely available, free information – whether or not this information is to be used in an academic context is a different debate entirely.

Although Wikipedia calls itself an encyclopedia, it can never truly be one, as its accuracy fluctuates and depends on edits from millions of users. However, that is not to say that it is devoid of accuracy and truthful information simply because there is no single author whose name is attached to each article. For most entries, the facts are likely to be true. Can you think of a reason why anyone would want to purposefully post an inaccurate average speed of the cheetah? Of course such people exist in theory, but Wikipedia itself operates on the assumption that people who care about maintaining and protecting the truth exist as well.

Lanier criticizes the fluidity of the information on Wikipedia, citing a personal example in which he is portrayed as a filmmaker. After encountering one piece of erroneous information, he extrapolates the idea of inaccuracy to encompass the entire database, when in truth he has no way of judging the accuracy of Wikipedia as a whole; its comprehensiveness is beyond that of any established encyclopedia, and therefore its validity is really a matter of trust: trust in the millions of other users and their intent to maintain truth on the internet.

It is in this area that Wikipedia does have its limitations. In academia, information must always be connected to individuals, as one’s reputation depends on the reliability and accuracy of one’s sources. To be fully accountable, one’s sources must be tied to other academicians, whose works cite others, and so on. Wikipedia fails in this respect, as it humbly provides information without linking it to individual authors accountable for each page. However, Lanier fails to realize that the structure of Wikipedia prevents it from linking to authors: there are so many edits to each entry that it would be impossible to hold any single author accountable for the whole. The focus of the website is the same as that of any open source piece of software. Sure, it may have bugs, and if it does, there is nobody to blame; but the individual who finds a bug has the opportunity, or rather the obligation, to fix the error or make it known to those who can. Lanier calls this structure “Maoist”, noting that it removes “the scent of people” from informational works. One must remember that each entry in Wikipedia is made and maintained by humans all around the world, and it is not a website designed to express opinions; rather, it is merely a source of information designed by and for those who believe that factual information should be free to all.

Thursday, September 21, 2006

Video Game Movies = Cult Classics

Have you ever come out of a movie based on a video game and felt like something was lacking? If so, you may be one of the millions of Americans who feel the same way and suffer from a phenomenon known as “a change in media.” The reason for this feeling doesn’t (usually) lie in the quality of the special effects or the cast chosen to play out the story; rather, it is the difference in the motivation for playing a video game as opposed to watching a movie. Gonzalo Frasca’s piece “Simulation 101: Simulation versus Representation” addresses these differences in value by taking various pieces of media and moving them between formats that allow for decision-making on behalf of the participant and formats that simply present information to a viewer without any form of active response.

It is a widely held opinion that most video games turned into movies are shunned by critics and many loyal fans of the gaming industry alike. When this is the case, the question becomes, “Why should a movie based on a game fail so miserably when the game itself was loved by the masses?” The answer lies in the limitations of the respective forms of media. A video game is a simulation of an idea created in the imagination of a developer, while a movie production based on a game is truly a representation of that same idea. The difference that remains between the two formats is a governed system of choice.

As defined in Frasca’s article, “Simulation is [the] act of modeling a system A by a less complex system B, which retains some of A’s behavior” (Frasca 3). Using this definition, the idea formulated in the imagination of a developer corresponds to system A, while a game based on the idea is system B. For example, the Halo series must have been simulated from an idea, because as our world stands today there are no actual Spartans, Grunts, Elites, Plasma Grenades, Needlers, Warthogs, Ghosts, or giant ancient rings floating around to simulate. When thinking over the behavioral rules that go into Halo, one must consider the source: the game will always be inferior to the creator’s imagination, because the imagination is limitless. However, the choices available are vastly greater than in a film. A game character (if programmed to) can jump, duck, turn, fly, shoot, etc. on command, but in a movie the freedom is gone, because the decisions have already been made and are permanent. In a game, you are expected to participate.
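
Frasca’s definition is, at bottom, what programmers do every day. As a hypothetical illustration (not an example from Frasca’s article), let system A be real-world ballistics, with drag, spin, and wind, and let system B be a few lines of code that retain just one of A’s behaviors: a thrown object rises, slows, and falls back down.

    # System A: real ballistics.  System B: this far simpler model, which
    # retains one behavioral rule of A -- constant downward acceleration.

    GRAVITY = 9.8  # m/s^2, the single behavior of A we choose to keep

    def simulate_throw(velocity, steps=12, dt=0.2):
        height, t = 0.0, 0.0
        for _ in range(steps):
            t += dt
            height += velocity * dt
            velocity -= GRAVITY * dt   # the retained rule of system A
            if height < 0:
                break
            print("t=%.1fs  height=%.2f m" % (t, height))

    simulate_throw(velocity=15.0)

The model is crude, yet it behaves like the real thing in the one respect it kept, and that is precisely what makes it a simulation rather than a representation: it enforces a rule the player can push against instead of replaying a fixed outcome.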

A movie production of a video game takes the original developer’s idea and retools it to create a presentation that will entertain the masses even without the ability to choose the protagonist’s actions. For example, the movie Mortal Kombat is based on the Midway classic of the same name. In the game, players select characters that face off in one-on-one fighting combat and nothing more; the extent of the story mode is the background it gives each character, keeping the same tournament fighting style with no other differences. In the movie, however, a story plays out, following the trials and tribulations of the protagonist Liu Kang until he finally defeats the evil sorcerer Shang Tsung in an act of revenge for Tsung’s capture of his younger brother’s soul. The point is that while the game may have mentioned Liu Kang’s goals for participating in the tournament, choosing him as a character was never a decision forced on the player. The movie had to make that decision for the audience, because a representation has no means of governing behavioral rules of choice. In a movie, you are expected to sit and enjoy.

Applying Frasca’s definition to movies based on video games makes it easy to recognize why few become more popular than cult classics. The motivations for seeing a movie are quite different from those of playing a video game when the entertainment is broken down.


Collective Makes the Individual

I appreciate Japanese ingenuity, particularly in digital accessories. However, the strangest inventions I have ever seen also come straight from Japan, and they are, quite frankly, hilarious. One involves attaching a roll of toilet paper to a helmet, so that in case you need to sneeze, a fresh wad of toilet paper hangs conveniently in front of your forehead. Another “innovation” is a gigantic five-foot-long Swiss Army knife, except it contains all the essential farming tools: a rake, a hoe, a shovel, and so on. A personal favorite is called the “Noodle Eater’s Hair Guard”, a wide-brimmed pink rubber ring that surrounds your face as you eat noodles and prevents your hair from slipping into your food. For obvious reasons, Japanese society routinely turns down these ridiculous ideas. Imagine, though, what would happen if the Japanese collective began to give the creative individuals behind these wacky ideas some credit and actually started using the inventions. Would we see people in Tokyo wearing toilet paper rolls on their heads, with pink rubber rings circling their faces in noodle shops?

But the reverse question extends well beyond funny Japanese novelties. Where would any of the great individuals of our past have stood without the collective? Edison’s light bulb was an invention indeed, but what if the collective had, for any combination of reasons, simply rejected it and stuck with candlelight? What if Einstein’s physics had been ignored by the scientific collective, like the work of many scientists before him? What if the contributions of any given individual had been lost on an apathetic collective? All the individuals who have mattered throughout history depended on the collective of their time to accept them and make them matter. Without the acceptance of the collective, all individuals grind down to nothing.

Over the ages, the collective mindset has mattered hugely, much more so than the opinions of individuals. Jaron Lanier’s “Digital Maoism” argues that individuals should be cherished over the collective, in response to the rise of the collective’s influence. However, in a sense, the collective has always mattered more than any individual, before and after the introduction of the Internet. Perhaps we should care more for the individuals of the world, but without the collective, no individual matters at all.

Introducing world connectivity simply makes the collective larger, and individuals still must meet the demands of this collective to become of any importance. Furthermore, in a world where the collective matters so much more than the individual, perhaps trusting the at times obviously wrong collective is better than knowing the opinions of an expert individual. A strange notion, but take a look at any given wiki: its user-edited content, if reflective of actual and comprehensive collective opinion, might be seen as fact in the eyes of the collective. For example, if a wiki article defines the word “finizzle” as “just short of fuzzy”, and the collective as a whole accepts it and chooses not to edit it, then, in the eyes of the collective that views the wiki, the definition is in fact true. As Lanier states, the collective is not all-wise and is quite often stupid, but if the entire collective agrees on a topic, in a sense its validity no longer matters; the topic becomes defined as the collective defines it, not as an individual expert would attempt to define it, since the collective would simply edit any wiki back to its own preference. In this sense, the advent of the wiki introduced a whole new dimension to the power of the collective: not only does the collective choose the individuals that matter, it can now also define anything it chooses to, despite the efforts of individuals who may know better. Jaron Lanier’s description on Wikipedia is a perfect example of this: his entry may be incorrect, but to the collective that constantly erases his edits, he is a film director, and the actual truth of the matter can be said not to, well, matter.

Lanier may be correct in stating that the individual should be cherished first, but, in a sad sense, the crowd, the hive mind, the collective is all that matters. As the acceptance of innovation and the constant editing of wiki files can demonstrate, individuals may impact the collective, but only if they are allowed to.

Response to "Digital Maoism"

While reading Jaron Lanier’s essay “Digital Maoism”, the first question I asked myself was: what is Maoism? I went online and googled Maoism. To my surprise, Wikipedia was the first site to appear in the Google rankings. I can see why Lanier says, “the problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly.” Still, I do not fully agree with this statement. There is no problem with the way Wikipedia is used. Wikipedia is a free online encyclopedia, and encyclopedias are books filled with articles on various topics. By Lanier’s own evidence, a study in Nature found Encyclopedia Britannica and Wikipedia to be about equally accurate. How can using Wikipedia as a valid source be dangerous if it is about as accurate as an actual encyclopedia?

I feel that the dangerous part of Wikipedia is not its use but the anonymity of the authors of its articles. Wikipedia by itself is not that dangerous, though. Lanier says, “In the last year or two the trend has been to remove the scent of people, so as to come as close as possible to simulating the appearance of content emerging out of the Web as if it were speaking to us as a supernatural oracle.” Wikipedia in its current state seems to be halfway to becoming such a “supernatural oracle”, but it does not have the potential to become an all-wise entity. An all-wise entity that could control what people think is a scary thing. The anonymous nature of Wikipedia gives it an all-knowing voice, but in many of the articles that voice is just an expert expanding on their area of expertise. With a named author you can find out what viewpoint or opinion the author might have, but with an anonymous author the opinions of the article are less valued and felt. Even so, I believe this loss does not take away from the meaning of the articles written in Wikipedia. The anonymity of the author leaves room for the author to be incorrect, which can then be fixed and changed by others who have more expertise. Lanier states that Wikipedia at the least is successful “at revealing what the online people with the most determination and time on their hands are thinking.” In response, I would ask Lanier: wouldn’t a person who is an authority on a subject be the most persistent in getting correct information to the public? In his own case, he is no longer referred to on Wikipedia mainly as a director, but mainly as a virtual reality developer. Lanier’s persistence in writing “Digital Maoism” made the Wikipedia collective notice that he no longer wanted to be known as a director.

I think that Lanier used his example of Wikipedia in the wrong manner, especially at the beginning of his essay. The main thought that Jaron Lanier was trying to get across is that there is a downfall to collective action. Wikipedia, to me, is not a good example of that downfall. Those who care enough to keep Wikipedia updated and as accurate as an encyclopedia do not introduce stupid or imprecise ideas. Wikipedia is not leading to a downfall of the world’s intelligence.

Wednesday, September 20, 2006

Wikipedia: A Harbinger of a New Era

Many people will criticize what I am about to say. Indeed, the majority of the population will completely reject it. You see, if the current state of the world can be described as a democracy, then what I am about to propose may most aptly be defined as an anarchy, not of government, but of information and opinion. Despite the obvious negative connotations, I ask you: what is really so bad about anarchy?

Before I continue, I should properly define exactly what kind of situation an “anarchy of opinion and information” entails. In the current mediascape, popular demand exerts a heavy, but not total, influence on new media products. New movies are still pre-filtered by an elite group of critics, and even popular TV shows such as American Idol cut prospective winners down to a select group before giving the public a chance to vote. A similar censorship exists in information, where editors choose the content that appears in scientific journals, newspapers, and magazines. Anarchy removes these filtration devices, placing every piece of information and media before the public before any bias can be formed. I am not suggesting that critics and editors be silenced, only that they critique and edit works after those works have been made available to the public. Nor am I suggesting that every piece of information ever created be placed in the public eye. I am merely suggesting that the ultimate choice be placed in the hands of the author, director, or inventor, not in those of the critic, editor, or censor.

In a way, it is senseless to argue for anarchy, since, to some measure, it has already been achieved. In this day and age, anyone, anywhere, can post anything on the internet without fear of retribution. It is in this virtual microcosm that we find the director’s vision uncompromised and original drafts unpolished; works are received exactly as the public perceives them. This is where Wikipedia comes in. Since any member of the public can edit its content, it serves as the most complete record of public opinion in the history of man. Opponents of the website decry its numerous inaccuracies, yet such discrepancies are unimportant next to its empirical value (what kind of expert consults Wikipedia for burgeoning developments in his or her field?). Instead, Wikipedia functions as a harbinger, a messenger of times to come, when public acclaim will determine the worth and meaning of a work. This is the essence and genius of the internet.

In essence, I am not arguing for the adoption of anarchy, but for the acceptance of it. Let us accept that the phenomenon of the internet and informational anarchy is slowly infiltrating all aspects of our daily lives. Instead of being dragged inexorably, kicking and screaming, into this glorious new era, we should recognize that it has only come this far because we, collectively and unconsciously, rejoice in the change. It is time for us to sweep aside our petty discomforts and fears, realizing consciously that this is a shift that we can truly, and will eventually, welcome.