Off the Top: RDF Entries


February 20, 2007

Life Data Streams Bubbling

Emily Chang's post about her My Data Stream brought back memories from a ton of conversations last year. I captured a few of these ideas in a relatively short Life Data Stream post over at Personal InfoCloud, which has comments turned on.

You may want to take a look at TechMeme for related posts.



June 17, 2006

Cultures of Simplicity and Information Structures

Two Conferences Draw Focus

I am now getting back to responding to e-mail sent in the last two or three weeks and digging through my to-do list. As time wears on I am still rather impressed with both the XTech and Microlearning conferences. Both have a focus on information and data that mirrors the approach I have taken for years and that is the foundation for how I view all information and services. Both rely on well-structured data. This is why I pay attention and stay involved in the information architecture community. Well-structured data is the foundation of what falls under the description of web 2.0. All of our tools for open data reuse demand that the underlying data be structured well.

Simplicity of the Complex

One theme that continually bubbled up at Microlearning was simplicity. Peter A. Bruck, in his opening remarks, focussed on simplicity as the means to take the complex and make it understandable. There are many things in the world that are complex and seemingly difficult to understand, but many complex systems are made up of simple steps and simple-to-understand concepts strung together. Every time I think of breaking the complex down into simple components I think of Instructables, which lets people build step-by-step instructions for anything while making each of the steps a reusable object for other instructions. The Instructables approach is utterly brilliant and dead in line with the microlearning approach of breaking learning components down into simple lessons that can be used and reused across devices, based on the person wanting or needing the instruction, and delivered in the medium that matches their context (mobile, desktop, laptop, TV, etc.).

Simple Clear Structures

This structuring of information ties back into the frameworks for syndication of content and well-structured data and information. People have various uses and reuses for information, data, and media in their lives. This is the focus of the Personal InfoCloud. This is the foundation of information architecture: addressable information that can be easily found. But in our world of information floods and information pollution, where there is too much information to sort through, findability of information is as important as refindability (which is rarely addressed). Along with refindability comes the means to aggregate the information in interfaces that make sense of the information, data, and media so as to provide clarity and simplicity of understanding.

Europe Thing Again

Another perspective on the two conferences is that they were both in Europe. This is not a trivial variable. At XTech there were a few other Americans, but at Microlearning I was the only one from the United States, and there were a couple of Canadians. The European approach to understanding and building is slightly different from the approach in the USA. In the USA there is a lot of building and then learning and understanding, whereas in Europe there seems to be much more effort put into understanding and then building. The results are somewhat different, and the professional polish of European products that work out of the gate differs from what ships in the USA. This was really apparent with System One, which is an incredible product. System One has all the web 2.0 buzzwords under the hood, but the team focusses on a simple-to-use tool that pulls together the best of the new components, and only where it makes sense, to create a simple tool that addresses complex problems.

Culture of Understanding Complex to Make Simple

It seems the European approach is to understand and embrace the complex and make it simple through deep understanding of how things are built. It is very similar to Instructables as a culture. The approach in the USA seems to include the tools, but it has lacked understanding of the underlying components and in turn has left out elements that really embrace simplicity. Google is a perfect example of this approach. They talk simplicity, but nearly every tool is missing elements that would make it fully usable (the calendar not having sync, not being able to turn on only one or two Google tools rather than everything). This simplicity is well understood by the designers, and they have wonderful solutions to the problems, but the corporate culture of churning things out gets in the way.

Breaking It Down for Use and Reuse

Information in simple forms that can be aggregated and viewed as people need in their lives is essential to us moving forward and taking the pain out of technology that most regular people experience on a daily basis. It is our jobs to understand the underlying complexity, create simple usable and reusable structures for that data and information, and allow simple solutions that are robust to be built around that simplicity.



May 23, 2006

More XTech 2006

I have had a little time to sit back and think about XTech, and I am quite impressed with the conference. The caliber of the presenters and the quality of their presentations were some of the best of any conference I have been to in a while. The presentations got beneath the surface level of the subjects and provided insight that I had not run across elsewhere.

The conference focus on the browser, open data (XML), and high-level presentations was a great mix. There was much cross-over in the presentations, and it took me a while to catch on that this was not a conference of material I already knew (or presented at a more introductory level), but of things I wanted to dig deeper into. I began to realize late into the conference (or after it, in many cases) that the presenters were people whose writing and contributions I had followed regularly when I was doing deep development of web applications (rather than managing web development). I changed my focus last Fall to get back to developing innovative applications, working on projects built around open data, and filling some of the many gaps in the Personal InfoCloud (I also left to write, but that got sidetracked).

As I mentioned before, XTech had the right amount of geek mindset in the presentations. The one that really brought this to the forefront of my mind was XForms: an Alternative to Ajax by Erik Bruchez, which focussed on using XForms as a means to interact with structured data.

Once it dawned on me that this conference was rather killer and I should be paying attention to the content and not just to the floating island of friends, the event was nearly two-thirds of the way through. This huge mistake on my part came from the busy nature of things leading up to XTech, as well as not getting there a day or two earlier to adjust to the time and attend the pre-conference sessions and tutorials on Ajax.

I was thrilled to see the Platial presentation and meet the makers of the service. When I went to Simon Willison's presentation rather than the GeoRSS session, I realized there was much good content at XTech, and it is now on my must-attend list.

As the conference progressed I kept thinking of all the people who would have really benefitted from and enjoyed XTech as well. A conference about open data, and about systems for building applications with it that meet real people's needs, is essential for most developers working on the live web these days.

If XTech sounded good this year in Amsterdam, you may want to note that it will be in Paris next year.



January 21, 2006

Changing the Flow of the Web and Beyond

In the past few days, wrapped up in moving this site to a new host and in client work, I have come across a couple of items with similar DNA, which also relate to my most recent post on the Come to Me Web over at the Personal InfoCloud.

Sites to Flows

The first item to bring to light is a wonderful presentation, From Sites to Flows: Designing for the Porous Web (3MB PDF), by Even Westvang. The presentation walks through the various activities we engage in as personal content creators on the web. Part of what makes it fantastic is its focus on microcontent (granular content objects) and its relevance to context. Personal publishing is more than publishing on the web; it is publishing to content streams, or "flows" as Even calls them. These flows of microcontent are consumed less often in web browsers as their first point of use than in syndicated feeds (RDF, RSS/Atom, Trackback, etc.). Even then moves on to Underskog, a local calendaring portal for Oslo, Norway.

The Publish/Subscribe Decade

Salim Ismail has a post about The Evolution of the Internet, in which he states we are in the Publish/Subscribe Decade. In his explanation Salim writes:

The web has been phenomenally successful and the amount of information available on it is overwhelming. However, (as Bill rightly points out), that information is largely passive - you must look it up with a browser. Clearly the next step in that evolution is for the information to become active and tell you when something happens.

It is this being overwhelmed with information that has been of interest to me for a while. We (the web development community) have built mechanisms for filtering this information. There are many approaches to this filtering, but one of them is the subscription and alert method.

The Come to Me Web

It is almost as if I had written Come to Me Web as a response to, or extension of, what Even and Salim are discussing (the post had been in the works for many weeks and is a longer explanation of a focus I started putting into my presentations in June). This come-to-me web is something very few are doing, or doing well, in our design and development practices beyond personal content sites (and even there it really needs a lot of help in many cases). By focussing on the microcontent chunks (or granular content objects, in my personal phraseology) we can not only provide the means for others to best consume the information we are providing, but also aggregate it and give people a better understanding of the world around them. More importantly, we provide the means to best use and reuse the information in people's lives.

Important in this flow of information is keeping the source and the identity of the source. Having the ability to get back to the origination point of the content is essential for getting more information, the original context, and updates. Understanding the identity of the content provider will also help us understand perspective and shadings in the microcontent they have provided.
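To make the idea of a granular content object concrete, here is a minimal sketch of what such a chunk might carry; the field names are my own illustration, not any formal standard:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GranularContentObject:
    """A hypothetical microcontent chunk that keeps its provenance attached."""
    title: str
    body: str
    source_url: str      # link back to the origination point for context and updates
    author: str          # identity of the content provider
    published: datetime  # when the chunk was created or last updated

def aggregate(chunks):
    """Group chunks by author so perspective and shading stay visible."""
    by_author = {}
    for chunk in chunks:
        by_author.setdefault(chunk.author, []).append(chunk)
    return by_author
```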



June 2, 2005

Replacement RSS and XML Button

Mike just posted a killer international and language-free RSS logo button on his site. I really like it. It works for those of us who understand the RSS text version, but it could also work for those who are less technically forward or who read non-English/Western languages. The RSS and XML text on the buttons always needs explanation for those not familiar with the terms. The end of many of the tutorials is often, "just click it, you do not really need to know what it means, just click". Something tells me Mike is on to something profound yet wonderfully simple.



November 29, 2004

Removing the Stench from Mobile Information

Standing in Amsterdam in front of the Dam, I was taking in the remnants of a memorial to Theo van Gogh (including poetry to Theo). While absorbing what was in front of me, I had a couple of people ask me what the flowers and sayings were about. I roughly explained the street murder of Theo van Gogh.

While I was at the Design Engaged conference listening to presentations about mobile information and location-based information, I thought a lot about the moment at the Dam. I thought about adding information to the Dam by electronic means. If you were standing at the Dam you could get a history of the Dam placed by the City of Amsterdam or a historical society. You could get a timeline of memorials and major events at the Dam. You could also get every human annotation.

Would we want every annotation? That question kept recurring, and it still does. How would one dig through all the digital markings? The scent of information could become the "stench of information" very quickly. Would all messages even be friendly, or would some contain viruses? Locations would need their own Google search to find the relevant pieces of information. And this would all be done on a mobile phone, those lovely creatures with their still-developing processors.

As we move to a world where we can access information by location, and in some cases by short-range radio signals or by touching our devices, there needs to be an easy way to accept these messages. The messaging needs some predictive understanding on our mobiles, or some pre-parsing of content and messaging done remotely (more on remote access farther down).

If we are going to have pattern-based tools built into our mobiles, what information would they need to base predictions on? It seems the pieces that could make it work are trust, value, context, where, time, action, and message pattern. Some of this predictive nature will need processing power on the mobile, or a connection to a service that can provide the muscle to predict based on the following metadata assets of the message.

Trust is based on who left the message and whether you know this person or not. If the person is known, do you trust them? This could require an assured identification of the name, which could be a mobile number, a tagging name crossed with some sort of key that proves identity, or some combination of known and secure metadata items. It would also be good to have a means of identifying a contributor as the (or an) official maintainer of the location (a museum curator annotating galleries in a large museum is one instance). Some trusted social tool could also do some predicting of a person's worthiness to us. The social tools would have to be better than most of today's variants of social networking tools, as those do not allow us to have a close friend but not really like or trust their circle(s) of friends. A good first pass would be to go through our own list of trusted people and accept a message left by any one of them. Based on our liking or disliking of the message, a rating would be associated with that person and used over time.

Value is a measure of the worthiness of the information, normally based on the source of the message. Should the person who left the message have a high ranking for content value, it could be predicted that the message before us is of high value. Suppose the messages are restaurant reviews, we have liked RacerX's previous reviews found in five other cities, and RacerX has just given the restaurant we are standing in front of a solid review that meets our interests. But does RacerX have all the same interests we do?

Context is a difficult pattern to predict, as there are many contextual elements such as mood, weather, and what the information relates to (restaurant reviews, movie reviews, tour recommendations, etc.). Can we set our mood and the weather when predicting our interest in a message? Is our mood always the same in certain locations?

Where we are is more important than mere location. Do we know where we are? Are we lost? Are we comfortable where we are? These are important questions that may help as predictors and are only somewhat based on our location. Our location is the physical space we occupy, but how we feel about that spot, or what is around us at that spot, may trigger our desire not to accept a location-based message. Some of us feel very comfortable and grounded in any Chinatown anywhere around the globe and we seek them out in any new city. Knowing that we are in, or bordering on, a red-light district may trigger a predictive rule that turns off all location-based messages. Again, these are all personal to us and our preferences. Do our preferences stay constant over time?

Time has two variables on two planes. One plane is our own time; the other relates to the time of the messages. One variable is the current moment and the other is the historical time series. The current moment may matter if it is early morning, we enjoy exploring in the early morning, and we want to receive information that augments our explorative nature. Current messages may be more important to us than historical messages. The other variable is historical time and how we treat the past. Some of us want all of our information to be of equal value, while others will want the most recent items to carry stronger weight, so that new events keep the information flow attuned to our current interests and desires. We may have received a virus from one of our recent messages and want to change our patterns of acceptance to reflect a new cautionary nature. We may want to limit how far back we will read messages.

Action is a very important variable to follow when malicious code could damage our mobile or the information we have stored in it or associated with it. Does the item we are about to receive trigger some action on our device, or is it a static, docile message? Do we want to load active messages into a sandbox on our mobile so they cannot infect anything else? Or do we want to accept active messages only if they meet certain other criteria?

Lastly, message pattern involves the actual content of the message and would predict whether we want to read the information if it is identical or similar to other messages (think attention.xml). If the Dam has 350 messages similar to "I am standing at the Dam", we may want to limit those to ones that meet some other criteria, or to just one, if we had the option. Do we have predictors based on the language patterns in messages? Does our circle of trusted message writers always have the same spellings for certain wordz?

All of these variables could lead to a tight predictive pattern that eases the information we access. The big question is how all of this gets built into a predictive system that works for us the moment we get our mobile device and start using the predictive services. Do we fill out a questionnaire that creates our initial settings? Will new phones have ranking buttons for messages and calls (it would be nice to rank calls we receive so that our mobile would put certain calls directly into voice mail) so there is an easier interface for setting our preferences and patterns?
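For illustration only, here is a rough sketch of how those variables might be combined into a single acceptance score; the weights, field names, and threshold are invented and would need real tuning:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    author: str
    is_official: bool          # e.g. a museum curator annotating a gallery
    triggers_action: bool      # active content we may want sandboxed
    age_days: float
    text: str

@dataclass
class Preferences:
    trusted_people: dict = field(default_factory=dict)  # author -> rating built from past likes/dislikes
    recency_half_life: float = 30.0                     # how quickly old messages lose weight
    accept_active_content: bool = False
    threshold: float = 0.5

def acceptance_score(msg: Message, prefs: Preferences, seen_texts: list) -> float:
    """Combine trust, value, time, action, and message pattern into one score."""
    trust = prefs.trusted_people.get(msg.author, 0.1)
    if msg.is_official:
        trust = max(trust, 0.8)
    recency = prefs.recency_half_life / (prefs.recency_half_life + msg.age_days)
    action_penalty = 0.0 if (not msg.triggers_action or prefs.accept_active_content) else 0.5
    duplicate_penalty = 0.4 if msg.text in seen_texts else 0.0  # "I am standing at the Dam" x 350
    return max(0.0, trust * recency - action_penalty - duplicate_penalty)

def filter_messages(messages, prefs):
    """Return only the messages that clear the personal threshold."""
    seen, accepted = [], []
    for m in messages:
        if acceptance_score(m, prefs, seen) >= prefs.threshold:
            accepted.append(m)
        seen.append(m.text)
    return accepted
```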

Getting back to remote access to location-based information: for me it seems to offer some excellent benefits, two of them related to setting our predictive patterns. The first is that remote access to the information could be done through a more interactive device than our mobile. Reading and ranking information from a desktop on a network, or a laptop on WiFi, could let us get through more information more quickly. The second benefit is helping us plan and learn from the location-based information before going to the location, so we could absorb the surroundings, like a museum or important architecture, with minimal local interaction with the information. Just think if our predictive service could have parsed through the 350 messages located at the Dam, and we had previewed the messages remotely and flagged four that could interest us while standing at the Dam. That could be the sweet smell of information.



October 8, 2004

Web 2.0: Source, Container, Presentation

At Web 2.0, Jeff Bezos of Amazon stated, "Web 2.0 is different. It's about AWS (Amazon Web Services). It's not on the web site for users to see. It's about making the internet useful for computers." This is very appropriate today, as it breaks the information model into at least three pieces: source, container, and presentation. Web 1.0 often had these three elements in one place, which made it really difficult to reuse the information, and at times even to use it.

The source is the raw information or content from the creator or main distributor. The container is the means of transporting the information or content; it can be XML, CSV, text, XHTML, etc. The presentation is what makes the information or content human-consumable; it can be HTML with CSS, Flash, PDF, a feed reader, a mobile application, a desktop application, etc.
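A minimal sketch of the separation, using a made-up restaurant listing as the source, XML as the container, and two different presentations built from the same container:

```python
import xml.etree.ElementTree as ET

# Source: the raw content from the creator (values are invented for illustration)
source = {"name": "Luigi's Pizzeria", "phone": "555-0123", "zip": "20500"}

# Container: a transport-neutral structure (XML here; CSV or JSON would work too)
item = ET.Element("listing")
for key, value in source.items():
    ET.SubElement(item, key).text = value
container = ET.tostring(item, encoding="unicode")

# Presentation 1: XHTML for a browser
parsed = ET.fromstring(container)
html = "<p><strong>{}</strong><br/>{} ({})</p>".format(
    parsed.findtext("name"), parsed.findtext("phone"), parsed.findtext("zip"))

# Presentation 2: a terse text-message rendering for a mobile device
sms = "{} {} {}".format(parsed.findtext("name"), parsed.findtext("phone"), parsed.findtext("zip"))

print(container)
print(html)
print(sms)
```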

The importance of the three components is that they are most valuable when they stand alone. Many of the problems and frustrations for people trying to get information off the web and reuse it have come from the components not being separated. Take most Flash files, which tie the container and the presentation into one proprietary object from which it can be extremely difficult to extract the information for reuse. The same applies to PDF files, which are less than optimal for sharing information for anything other than reading, if the PDF can be read on the device at all. As mobile use of the internet increases, the separation becomes much more valuable. The separation has always been the smart thing to do.

Today Google launched a beta of its Google SMS for mobile devices. The service takes advantage of the Google web services (the source) and lets mobile users send a text message with a query (asking "pizza" and providing a zip code); Google responds with a text message with the information (local pizzerias with their addresses and phone numbers). The other day Tantek demonstrated Semantic XHTML as an API, which provides openly accessible information that is aggregated and reused with a new presentation layer, Flash.

More will follow on this topic at some point in the not too distant future, once I get sleep.



October 3, 2004

Feed On This

The "My" portal hype died for all but a few central "MyX" portals, like my.yahoo. Two to three years ago "My" was hot and everybody and their brother spent a ton of money building a personal portal to their site. Many newspapers had their own news portals, such as the my.washingtonpost.com and others. Building this personalization was expensive and there were very few takers. Companies fell down this same rabbit hole offering a personalized view to their sites and so some degree this made sense and to a for a few companies this works well for their paying customers. Many large organizations have moved in this direction with their corporate intranets, which does work rather well.

Where Do Personalization Portals Work Well

The places where personalization works are points where information aggregation makes sense. The my.yahoo portals work because they are the one place for a person to do their one-stop information aggregation. People who use personalized portals often have one for work and one for their personal life. People use personalized portals because they provide one place to look for the information they need.

On a corporate intranet, having one centralized portal works well. These interfaces to a centralized resource, holding information each person wants according to their needs and desires, can be very helpful. Having more than one portal often leads to quick failure, as there is no centralized point that is easy to work from to get to what is desired. People use these tools as part of their Personal InfoCloud, which has information aggregated as they need it, categorized and labeled in the manner easiest for them to understand (some organizations use portals as a means of enculturating users into the common vocabulary the organization desires; this top-down approach can work over time, but it also leads to users not finding what they need). People in organizations often want information about the organization's changes, employee information, calendars, discussion areas, etc. to be easily found.

Think of personalized portals as very large umbrellas. If you can think of logical umbrellas above your organization, then you are probably in the wrong place to build a personalized portal, and your time and effort will be far better spent providing information in a format that can be easily used in a portal or information aggregator. Sites like the Washington Post's personalized portal did not last because of the costs of keeping the software running and the relatively small group of users that wanted or used the service. Was the Post wrong to move in this direction? No, not at the time, but now that there is an abundance of lessons learned in this area it would be extremely foolish to move in this direction.

You ask about Amazon? Amazon does an incredible job at providing personalization, but like your local stores, that is part of its customer service. In San Francisco I used to frequent a video store near my house on Arguello. I loved that neighborhood video store because the owner knew me and my preferences, and off the top of his head he remembered what I had rented and what would be a great suggestion for me. The store was still set up for me to use just as it was for those who were not regulars, but he provided a wonderful service for me, which kept me from going to the large chains that recorded everything about me but offered no service that helped me enjoy their offerings. Amazon does a similar thing, and it does it behind the scenes as part of what it does.

How does Amazon differ from a personalized portal? Aggregation of the information. A personalized portal aggregates what you want, and that is its main purpose. Amazon allows its information to be aggregated using its API. Amazon's goal is to help you buy from Amazon. A personalized portal has as its goal providing one-stop information access. Yes, my.yahoo does have advertising, but its goal is to aggregate information in an interface that helps users find the information they want easily.

Should government agencies provide personalized portals? It makes the most sense to provide this at the government-wide level. Similar to First.gov, a portal that allows tracking of government information would be very helpful. Why not at the agency level? Cost and effort. If you believe in government running efficiently, it makes sense to centralize a service such as a personalized portal. The U.S. Federal Government has very strong restrictions on privacy, which greatly limit the login for a personalized service. The U.S. Government's e-gov initiatives could be other places to provide these services, as there is information aggregation at those points also. The downside is having many login names and passwords to remember to get to the various aggregation points, which was one of the large downfalls of the MyX players of the past few years.

What Should We Provide

The best solution for many is to provide information that can be aggregated. The centralized personalized portals have been moving toward allowing the inclusion of any syndicated information feed. Yahoo has been moving in this direction for some time, and the new beta version of my.yahoo released in the past week allows users to select the feeds they would like in their portal, even from non-Yahoo resources. In the new my.yahoo, any information that has a feed can be pulled into the information aggregator. Many of us have been doing this for some time with RSS feeds, and it has greatly changed the way we consume information by making information consumption far more efficient.

There are at least three layers in this syndication model. The first is the information syndication layer, where information (or its abstraction and related metadata) is put into a feed. These feeds can then be aggregated with other feeds, similar to what del.icio.us provides (del.icio.us also offers a social software and sharing tool that helps share out personally tagged information and aggregations based on this bottom-up categorization, or folksonomy). The last layer is the information aggregator or personalized portal, which is where people consume the information and choose whether they want to follow the links in the syndication to get more information.
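As a sketch of that aggregator layer, the feedparser library (a common Python feed parser) can pull several feeds into one list; the feed URLs below are placeholders, not real sources:

```python
import feedparser  # third-party library for parsing RSS/Atom/RDF feeds

# Placeholder feed URLs standing in for the sources a person chooses to follow
feeds = [
    "https://example.com/news.rss",
    "https://example.org/links.rdf",
]

entries = []
for url in feeds:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        entries.append({
            "title": entry.get("title", ""),
            "link": entry.get("link", ""),
            "source": parsed.feed.get("title", url),  # keep the origin visible
        })

# The "portal" layer: one simple view across every subscribed feed
for item in entries:
    print("{} - {} ({})".format(item["source"], item["title"], item["link"]))
```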

There is little need to provide another personalized portal, but there is great need for information syndication. Just as people have learned with internet search, the information has to be structured properly. The model of information consumption relies on the information being found. Today information is often found through search and information aggregators and these trends seem to be the foundation of information use of tomorrow.



August 4, 2004

You Down with Folksonomy?

Gene supplies a good overview of folksonomy, the bottom-up social classification that takes place on Flickr, del.icio.us, etc. It would be great to have a tool that could help organizations develop a folksonomy over time. Gmail from Google could develop a great folksonomy that could be overlaid on one's own searches.

Marry this idea with Paul Ford's "Google beats Amazon and eBay at Semantic Web" and you have a wonderful jump on a Semantic Web that is personalized. Take Gene's idea of building a thesaurus or crosswalk of terms within and across systems and things can really take off. There would need to be contextual tools added to handle multiple definitions: Macintosh is a synonym for Mac, but Macintosh is an artist and a computer, while Mac is a computer, an artist, and a British term for raincoat (short for Macintosh). The Semantic Web adds context to get these things straight.
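A toy sketch of such a crosswalk, keeping context attached to each term using the Macintosh example; the structure is just my own illustration, not a proposal for a standard:

```python
# Each tag maps to the senses it can carry; a sense links synonyms across systems.
crosswalk = {
    "macintosh": [
        {"sense": "computer", "synonyms": ["mac", "apple macintosh"]},
        {"sense": "artist", "synonyms": ["mac"]},
        {"sense": "raincoat (British)", "synonyms": ["mac"]},
    ],
    "mac": [
        {"sense": "computer", "synonyms": ["macintosh"]},
        {"sense": "artist", "synonyms": ["macintosh"]},
        {"sense": "raincoat (British)", "synonyms": ["macintosh"]},
    ],
}

def related_terms(tag, sense):
    """Return synonyms for a tag once its context (sense) is known."""
    for entry in crosswalk.get(tag.lower(), []):
        if entry["sense"] == sense:
            return entry["synonyms"]
    return []

print(related_terms("Mac", "computer"))  # ['macintosh']
```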



July 17, 2004

Now Delicious

Time has been very thin of late. In the past six months or so I started noticing an increasing number of links from del.icio.us, and I started pulling the feeds of some folks whose reading lists I like to follow into my site feed aggregator. I had about four or five del.icio.us feeds in my aggregator (a meta-aggregation of others' meta-aggregations - MetaAg MetaAg). This past week I was taking medicine that tweaked my sleep patterns, so I had some free awake time after midnight and finally set up my own vanderwal del.icio.us feed.

I like having the ability to pull the [meta] tag aggregations that others have used, like security, which is a great help during the day at work. I can also track some topics I keep finding myself at the periphery of and am ever more interested in, as they tie to some personal projects.

I did consider something similar with Feedster, but it was down for updating recently when I had the tiny bit of time to fiddle with setting something up. By the way, Feedster is now Standards-based (not fully valid, but rather close) and it loads very quickly (most of the time).



July 9, 2004

Tantek Mulls Contact Info Updating

Tantek mulls a means to keep contact info up to date. This should be much easier than Tantek has made it out to be. It could be as simple as publishing one's own vCard that is pointed to with RSS. When the vCard changes, the RSS feed notifies the contact info repositories, and they grab the vCard and update the repository's content. This is essentially pulling contact information into the user's Personal InfoCloud. (Contact info updating and applications are a favorite subject of mine to mull over.)

Why vCard? It is a standard sharing structure that all contact information applications (repositories) understand. Most of us have more than one contact repository: Outlook at work; Lotus Organizer on the workstation at home; Apple Address Book and Entourage on the laptop; Palm on the cellphone PDA; and Addresses on the iPod. All of these applications should sync and perfectly update each other (deleting and updating when needed), but they do not. Keeping vCard field names and order constant should give the info corrective properties. The vCard RDF W3C specification seems to lay out existing standards that could be adopted for a centralized endeavor.
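A hedged sketch of what the repository side could look like: poll an RSS feed for the newest vCard pointer, fetch it, and naively read a few fields. The feed URL is a placeholder, and real code would use a proper vCard parser rather than this line-splitting:

```python
import urllib.request
import feedparser  # third-party RSS/Atom parser

FEED_URL = "https://example.com/contact-updates.rss"  # placeholder feed announcing vCard changes

def latest_vcard_url(feed_url):
    """Return the link of the newest item in the contact-update feed."""
    feed = feedparser.parse(feed_url)
    return feed.entries[0].link if feed.entries else None

def fetch_vcard_fields(vcard_url):
    """Naively pull a few vCard fields; a real repository would use a full vCard library."""
    with urllib.request.urlopen(vcard_url) as response:
        text = response.read().decode("utf-8")
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.split(";")[0].upper()] = value.strip()
    return fields  # e.g. {"FN": "...", "TEL": "...", "EMAIL": "..."}

url = latest_vcard_url(FEED_URL)
if url:
    contact = fetch_vcard_fields(url)
    print(contact.get("FN"), contact.get("TEL"), contact.get("EMAIL"))
```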

Why not Plaxo? Plaxo is limited to applications I do not run everywhere (for its download version), and its web version is impractical: when I need contact information I am most often not in front of a terminal, I am using a Treo or pulling the information out of my iPod.

While Tantek's solution is good and somewhat usable, it is not as universal as a vCard RDF would be, paired with an application that pinged the XML file to check for an update daily or every few days.



February 14, 2004

Rael on Tech

Tech Review interviews Rael about rising tech trends and discusses alpha geeks. This interview touches on RSS, mobile devices, social networks, and much more.



December 30, 2003

CSS in RSS

Something to come back to, CSS in RSS. It works in NetNewsWire.



May 29, 2003

RSS offers Web content without a connection

I am happy that RSS changed my Web reading habits, as for the past two nights with limited Internet access I was able to crack open NetNewsWire and catch up on some reading from the 93 feeds I currently pull in. I have always been curious why some folks post only titles of articles in their RSS or RDF feeds and not even a short summary or teaser. Now this really frustrates me, as I could only read titles, some of which were intriguing and could have warranted a "to read" status, but instead I ignored them and jumped to items that offered summaries or full content.



December 11, 2002

RSS feeds are very Clue Train friendly it seems

Not long after I posted my comments about RSS disconnecting the creator and the user, it started sinking in that it really does not matter. Well, it does to some degree, but from a user's perspective RSS allows a quicker, more efficient method of scanning for information they are interested in and makes it easy to see from one interface when new content has been written. I use others' blogs and digests to find information to post for my own reflection and to use as jumping boards to new ideas.

Yes, the interaction between creator and user is important, but it is not as important as getting information out. I began thinking that the whining about the lack of interaction on my part was rather selfish and very contrary to the focus I have for most information, which is having the ability to access, digest, add to, or reformulate the information into another medium or presentation that may offer better understanding.

I was self-taught in the values of the Clue Train, so when I heard about it for the first time I was surprised to some large degree that the manifesto had resonance and turned on a light for many people. For myself and some others, I guess we drank the Kool-Aid early, as we thought this was the way things were, or should be, from the beginning of electronic information: a truly open community where information flows freely. Yes, the RSS/RDF/XML feed is a freer flow of information and puts the choice of information consumption in the user's hands.



December 9, 2002

RSS and interconnections

Since I added the vanderwal.net RSS feed I have been picking up other RSS and RDF feeds. I have been using Ranchero's NetNewsWire Lite to pull in feeds of many sites I read on a regular basis. I have become a convert to RSS/RDF extracts. They are a time saver for seeing only updated sites. I have read feeds of many of the news sites from MacReporter for quite some time, but having personal content and blogs pulled in is quite a timesaver and lets me get through more information.

I do see a downside to the XML feeds: the disconnection of the creator from the users. The Web has given us the ability to have digital ghosts that we know come to our sites and possibly read content. This is much like the shadow people in Plato's cave, in that we do not see the actual people that come to the sites, but we surmise what these visitors are like and what they come to read. Occasionally we receive comments on the site, e-mails from visitors, or, best of all, meet folks in person who read and experience our work. It is very much a disconnected world built from guesses, for those who try and care (some just build resources for themselves to be used remotely, and all others are welcome "free riders", like here). The XML feeds seem to take away another level of the "interaction" between the creator and the users. This relationship is important in communication, as the feedback helps shape the message and offers paths for both parties to learn and grow.

The XML feeds offer the consumers of the information easier and more efficient means of getting, filtering, and digesting information, but the return path to the creator is diminished. The feeds are a consumer-oriented communication channel and not so much an interactive communication channel. The downside is a lack of true interactive communication, which becomes more like consuming produced products, much like frozen dinners popped in the microwave. The interaction provides the creator with an understanding of how the user consumes the information, what the consumer is finding usable, and how the consumer is being drawn to the information. When one cooks one's own meals, or is being cooked for, the meal can be spiced and seasoned appropriately for consumption. The presentation of the food can be modified to enhance pleasure. The live cooking process allows for feedback and modification. Much as with the interaction of information in a communication scenario, the creator and the consumer have a relationship: as the creator finds the structure and the preferred means of consuming the information, the presentation and structure of the information can be altered appropriately.

In a sense the XML feed can be seen as one type of information structure for presentation. There are other options available that can be used to bring back the interaction between the creator and consumer. Relationships and connections are built over this expansive medium of the Web through information and experience. These connections should be respected and provided a place to survive.



November 28, 2002

W3C RDF Primer

The W3C RDF Primer is something to come back to soon. The Resource Description Framework is a solid foundation for sharing information and is getting used more. It is a grown-up's version of RSS (the weblogger's resource sharing XML tool). It relies on well-structured information and helps keep information structured for reuse.


This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License.