Sunday, December 30, 2007

favorite programming books!

...below is a snapshot of some of the books I'm reading now; some I have read more than once, and I continue to find all of them very useful.

Just finished an awesome Ruby class at the University of Washington taught by Ryan 'Zenspider' Davis -- phenomenal, and I learned a ton. I start the three-month Rails class at UW next week. As you can see from the books, I'm trying to surround myself with as much Ruby and Rails information as possible, while also studying the finer points of web application development: RESTful design, regular expression techniques, high-performance techniques from Steve Souders' book (Souders created YSlow while he was the Chief Performance Yahoo!), and, as always, lots of Ajax. A few books are left out of this pic because they're at work: "JavaScript: The Definitive Guide" and "Prototype and Scriptaculous in Action".

  • Crane, et al, Ajax in Practice
  • Porteneuve, Prototype and script.aculo.us: You Never Knew JavaScript Could Do This!
  • Friedl, Mastering Regular Expressions
  • Souders, High Performance Web Sites
  • Hunt, Thomas, The Pragmatic Programmer
  • Cederholm, Bulletproof Web Design
  • Resig, Pro JavaScript Techniques
  • Thomas, DHH, Agile Web Development with Rails
  • Thomas, Programming Ruby (aka the PickAxe book)
  • Berube, Practical Ruby Gems
  • Crane, Prototype and Scriptaculous In Action
  • Flanagan, JavaScript: The Definitive Guide

Friday, December 28, 2007

Erlyweb, Erlang, Comet

Been taking a cursory look at Erlang and Erlyweb for a few months off and on -- keeping track of it via a variety of RSS feeds and articles in the community, toying around with it, etc.

My interim conclusion is that Erlang-based Erlyweb will never be a replacement for Rails, Django, etc. Building MVC web apps quickly and robustly for the large chunk of market share will remain an area where Java, Rails, PHP, ASP, Django, et al. dominate. Most will then take steps to scale up from there if need be.

But it will definitely fill the gap for creating highly available web sites and services -- especially in the realm of Comet (server-side 'push' instead of polling) apps, where data gets pushed to clients the instant it is available on the server. Comet requires long-lasting connections between client and server so data can be lazily pushed out as it becomes available... and holding 80,000 connections is not a big deal with Erlang, while it would be a very big deal with Apache. The other advantage is database connection pooling and fast access to databases.
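To make the push-vs-poll idea concrete, here is a toy sketch in Ruby rather than Erlang (purely my own illustration, not how Erlyweb does it): the client blocks on a queue until the server actually has data, the way a Comet connection parks on an open socket instead of polling on a timer.

```ruby
# A stand-in for a Comet connection: the client blocks until the
# server actually has something to push, instead of polling on a timer.
channel = Queue.new

client = Thread.new { channel.pop }   # parked, consuming (almost) nothing

sleep 0.1                             # ...time passes with no data...
channel.push "event pushed from server"

puts client.value   # => "event pushed from server"
```

The point of Erlang is that its processes make holding tens of thousands of these parked connections cheap; a thread per connection, as above, is exactly what doesn't scale in Apache.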

...the Erlyweb/Erlang sweet spot seems to me to be wherever multiple connections bring a big boost in performance or ease of implementation. Comet is a good example because you benefit from all those connections; the same goes for Twitter, or Amazon's SimpleDB. The database-pooling examples in Yariv's presentation are also very intriguing. All of those make sense, as do cases where background concurrency gives you a needed architectural boost. But for many classes of less complex apps -- a large percentage of web applications, in fact -- you can write them in a shorter time with, e.g., Rails and TDD, with no need for the backend concurrency that Erlyweb provides via Erlang...

Erlyweb and Erlang look like an up-and-coming 'right tool for the job' for high-availability, real-time data pushing. Something to keep an eye on.

Here are some links to whet the Erlang appetite of anyone interested... also a couple Comet links:

Saturday, December 15, 2007

RDF and the Semantic Web

Here's a nicely written introductory link on RDF and the Semantic Web, if you're curious. For some background and context: imo, this is why Google is coming out with its "Knols", etc., as recently reported in the media.

A quick summary of RDF: within the flavor of XML that is RDF, people and companies creating web pages embed loosely organized 'sentences' describing how the data on their pages is structured -- such as [what this page represents] [is a] [noun], or [actor] [starred_in] [movie] -- or, more specifically, whereby the URI "" could be used by anyone working in RDF to represent the concept of a dolphin.
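To make those 'sentences' concrete, here is a tiny Ruby sketch of my own (not any RDF library): each statement is a subject/predicate/object triple, and plain strings stand in for the URIs real RDF would use.

```ruby
# Each RDF statement is a triple: [subject, predicate, object].
# Real RDF uses URIs for each part; plain strings stand in here.
triples = [
  ["Flipper",      "is_a",       "dolphin"],
  ["dolphin",      "is_a",       "mammal"],
  ["Keanu Reeves", "starred_in", "The Matrix"]
]

# Answer the "is a" question for a given subject.
def is_a(triples, subject)
  triples.select { |s, p, _| s == subject && p == "is_a" }.map { |t| t[2] }
end

puts is_a(triples, "Flipper").inspect   # => ["dolphin"]
```

The interesting part is that anyone on the web can add triples about the same subjects, which is exactly the loose, decentralized linking that databases behind APIs don't give you.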

...Wikipedia is currently way ahead of any site in terms of static informational pages -- references that answer the "is a" question, for example. Why is that important? Databases structure data in a hierarchical, contained, somewhat inflexible way, so it takes work to connect that data to other data: a web service or an API wrapped around the database is required before others can access it. The Tim Berners-Lee world of REST and the Semantic Web theorizes ways to change that for the better, where pages themselves are related to the web at large via a common, standardized meta language.

So perhaps Google wants to catch up with Wikipedia and extend its capability (dominance) in search in that direction. I personally only care about what Google or Wikipedia is doing in the sense of how it will play out in the infrastructure of the web, and how to anticipate what's ahead. REST, RDF, and the Semantic Web are interesting topics gaining momentum (which is one motivator for me in learning Rails, which embraces REST).

If the Semantic Web booms, which seems like a logical evolution, these wiki- or Knol-style web properties will find themselves at the core of finding information on the semantic web -- i.e., the semantic noun-verb core. The standards for RDF and the Semantic Web are still evolving, and it will be fun to see where and how this goes over the next few years.

Some other links:
Redland RDF libraries (with a Ruby binding)
ActiveRDF (for Rails)


Tuesday, October 02, 2007

'The Prize' and Facebook

The Prize, by Daniel Yergin, is a great read. Before I explain further: the book is not about software at all. In a paragraph, I'll get to the software business parallels and ruminate on them. The book is about history, oil, economics, and geopolitics, and I'd highly recommend it to anyone interested in those topics. Yergin won a Pulitzer Prize and is highly respected among many economists, including the esteemed longtime Chairman of the Federal Reserve, Alan Greenspan. Agree with Greenspan or not on some issues, Yergin has respectable admirers of his work.

Here's the point of this blog post: in history, it's amazing how fast situations can change. You don't have to look back further than 2000 A.D. to witness that. I believe the internet economy is on much more solid ground now -- I'm not drawing any parallels there. But it is a clear example of how fast irrational exuberance can take hold of a whole economy, let alone a smaller group of individuals.

Here is a summary of an excerpt from The Prize: around 1865, when oil speculation was bubbling because of its discovery and the markets swelling around it (primarily lamp oil at that time), one farm in Pennsylvania sold for $1.3 million because of the opportunity to drill for oil. That's a lot of money back then; that's a lot of money in 2007. Less than a year later, the same plot of land sold for $2.0 million. Less than fifteen years after that, after a recession and some fires, that same plot of land sold... for under $5. Five dollars.

Things can change quickly. Granted, less quickly if one is diversified. Also granted, the oil market has not stopped since.

But this post is about your life as an individual -- not the life of an industry. The lesson I read into that: if you get the opportunity, like Mark Cuban, take the money and run.

With this blog as evidence, I love writing software. I love the software business, open source, etc -- every facet of this 'life'. It inspires me daily. I do it because I love to do it, and to a degree I can understand an emotional attachment to a piece of software built and grown.

Purely from an economic perspective on the individuals involved, Facebook, in my opinion, should be at least partly sold. With all due respect to the Facebook titans, I sure hope that Mark Zuckerberg and the Facebook officers have someone giving them perspective on what must be very difficult decisions. Why not sell half for $5 billion and write a new piece of software on your own island? There are plenty of ways to change the world, and $5 billion in your pocket is a good start, especially when you are in your 20s. There is just as much glory in sharing the successes with a value-added partner or partners. In my mind, Facebook is taking a substantial risk by not selling a large minority stake at the recently reported valuations.

Sunday, September 23, 2007

JuneBug Wiki

Came across JuneBug Wiki tonight while getting a wiki setup on my Mac for a couple projects I'm working on at home. I couldn't be happier thus far -- JuneBug Wiki is pretty slick.

First of all, it was as easy to install as it should be: if you already have the proper versions of Ruby, SQLite, and RubyGems installed, it takes about one minute to install JuneBug, and the directions on the JuneBug site are super simple to understand. After installation, it took about another minute to configure. The wiki syntax is straightforward for web developers: you can use most of the HTML you're used to, and the non-HTML syntax is relatively intuitive.

I used to use DokuWiki, and frankly always found it to be nice once it was installed, but somewhat annoying to install. Sorry, DokuWiki fanbois, but that was always my feeling. JuneBug wiki is nice, especially if you want a wiki written in Ruby. (JuneBug Wiki is written in Ruby on top of the "Camping" web microframework, not Rails)

Wednesday, September 19, 2007

UOOJ Development -- It's Huge! (pronounced "Yooooge")

Been doing a ton of JavaScript development over the past 3 years -- more and more hardcore over the last 18 months -- and loving every minute of it. This (obviously) involves Ajax development along with other client-side work: engineering user interfaces to be responsive, fast, and user friendly, to lighten the load on server bandwidth, and to keep code maintainable and extensible.

Throughout that time I have gravitated toward Unobtrusive Object Oriented JavaScript development. There is no better way to go in JavaScript: it presents a good challenge with a great payoff in separation of concerns, code re-usability, and keeping things DRY, among other benefits. Unobtrusive Object Oriented JavaScript is professional JavaScript development, imo. It mainly involves creating re-usable client-side JavaScript objects, with the key distinction that you keep JavaScript completely separated from HTML, and you write the web application so that it works whether JavaScript is on or off (defaulting to traditional form submits unless the handler returns false, etc.). You can do this by attaching events to DOM elements at runtime with event observers, or after elements have been created in Ajax callbacks, for instance. I won't go into too much detail -- just want to spread the word to those interested in Ajax development. You should certainly look into some of the following links to help you get started:


Prototype related UOOJ links:

jQuery Events/ready:

some Scope articles:

... the side point of this blog post is to coin the acronym UOOJ! (pronounced "Yoooge"! or "huge" with the "h" being silent). While SMS-ing my good Agile buddy back in Cincinnati, Paul Spencer, (who I like to call the "Agile Guru") I was remarking about how much I am loving unobtrusive object oriented development and he replied "UOOJ!". So the credit goes to him -- great acronym. UOOJ Development!

Thursday, August 30, 2007

refactoring Ruby in TextMate

I'd like to hear from TextMate+Rails users: what tools do you use to Refactor your Ruby code inside TextMate?

Monday, August 27, 2007

anatomy of a memcached daemon

Before you start experimenting with memcached, here's a quick description of the command line options used when starting it up (note: memcached has no configuration file).

The most commonly used command line options seem to be: -l, -d, -p, -m, and -c.

Here is what each of them do, per the memcached man pages:

-l <ip_addr>  Listen on <ip_addr>; default to INADDR_ANY. This is an
important option to consider, as there is no other way to secure the
installation. Binding to an internal or firewalled network interface
is suggested.

-d  Run memcached as a daemon.

-m <num>  Use <num> MB memory max for object storage; the default is
64 megabytes.

-c <num>  Use <num> max simultaneous connections; the default is 1024.

-p <num>  Listen on TCP port <num>; the default is port 11211.

Here is an example command line to fire up memcached with some options:

$ memcached -d -m 128 -l <ip_addr> -p 11211
This will start memcached as a daemon, allocate a max of 128 MB of memory for object storage, listen on the given IP address, and listen on TCP port 11211.
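Under the hood, memcached speaks a simple plain-text protocol on that TCP port (per its protocol docs, a `set` is `set <key> <flags> <exptime> <bytes>\r\n<data>\r\n`). Here's a small Ruby sketch of formatting that command -- the helper name is my own, and a real client gem would do this for you:

```ruby
# Build the plain-text "set" command memcached expects on its TCP port:
#   set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
# (helper name is illustrative; a client library handles this for you)
def memcached_set_command(key, value, exptime = 0, flags = 0)
  "set #{key} #{flags} #{exptime} #{value.bytesize}\r\n#{value}\r\n"
end

print memcached_set_command("greeting", "hello")
# sends: set greeting 0 0 5\r\nhello\r\n  (daemon replies "STORED\r\n")
```

Because the protocol is this simple, you can poke at a running daemon with nothing more than telnet, which is handy when debugging.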

So that's all there is to starting the daemon with a variety of options. Note there are more options in the man pages, but the above are enough to give it a whirl. If you ever need to stop your memcached daemons, issue the following command:
$ killall memcached
Here are a couple of helpful posts on using memcached with Rails:

installing MemCacheD on Mac OS X with MacPorts

installing MemCacheD on Mac OS X with MacPorts couldn't be easier:

1. As long as you have MacPorts (formerly DarwinPorts) installed on your Mac, just open up a bash shell and type the following:

$ sudo port install memcached
Because memcached utilizes libevent, MacPorts will check to see if libevent is present... if it is not, it will fetch libevent-1.3d.tar.gz, verify the checksum, and install it... then likewise fetch memcached, verify the checksum, extract, configure, and install. At the time of this post, this installs memcached 1.2.2_1.

Sunday, August 26, 2007

Rails stack on Mac OS X and Ubuntu 6.06

Been researching and practicing with Rails for a little while now, and went through the process of updating my setup on my MacBook Pro (Mac OS X), as well as on an Ubuntu (Dapper, 6.06) test machine (actually a test VM).

Most of this is similar to what you would find in "Agile Web Development with Rails" by the esteemed DHH and Dave Thomas (by the way, if you're interested in Rails development, imho it's the best book around by far -- very comprehensive and a smooth read). It is also similar to this post by James Duncan Davidson.

So why am I posting this here? Well because in order for me to get it to work on both my Mac and on Ubuntu 6.06, I had to tweak the instructions slightly for a variety of reasons (some version upgrades, etc).

Posting this on my blog not only to share, but for my own future reference. So here goes: for both Mac OS X and Ubuntu Dapper Drake 6.06, line-by-line installs of a Rails stack: Apache 2.2, MySQL 5 database server, Subversion, Ruby 1.8.4, RubyGems, the Ruby termios library, Mongrel, Mongrel Cluster, and Capistrano 2.0.

For Mac OS X on a MacBook Pro (Intel-based Mac):

# first you need to download and install DarwinPorts (now known as MacPorts) 1.5.2:

# once you have MacPorts installed, you may commence with the Rails stack install...
$ sudo port install apache2
$ sudo port install mysql5 +server
$ sudo port install subversion +tools
$ sudo port install ruby
$ sudo port install rb-rubygems
$ sudo port install rb-termios
$ sudo gem install -y rake
$ sudo gem install -y rails
$ sudo gem install -y capistrano
$ sudo gem install -y mongrel
$ sudo gem install -y mongrel_cluster
For the Ubuntu 6.06 VM on a MacBook Pro (Intel-based Mac):
$ sudo apt-get install apache2
$ sudo apt-get install mysql-server
$ sudo apt-get install openssl libssl-dev
$ sudo apt-get install libdb4.3 libdb4.3-dev db4.3-util libdb4.3++c2 libdb4.3++-dev
$ wget
$ sudo dpkg -i subversion_1.4.0-1_i386.deb
$ sudo apt-get install ruby
# install RubyGems from source:
$ wget
$ tar xzvf rubygems-0.9.2.tgz
$ cd rubygems-0.9.2
$ sudo ruby setup.rb
$ sudo gem update --system

# now install 'build-essential' before installing gems:
# Compilers (and manual pages [optional])
$ sudo apt-get install build-essential manpages-dev
$ sudo apt-get install ruby1.8-dev

# now install the following RubyGems: Rake, Rails, Capistrano, Mongrel, Mongrel_cluster...:
$ sudo gem install --include-dependencies rake
$ sudo gem install --include-dependencies rails
$ sudo gem install --include-dependencies termios
$ sudo gem install --include-dependencies capistrano
$ sudo gem install --include-dependencies mongrel
# selected: 2. mongrel 1.01 (ruby), 1. fastthread 1.0 (ruby)
$ sudo gem install --include-dependencies mongrel_cluster
I will try to make a follow-up post on configuring Mongrel, deploying with Capistrano 2.0, connecting Apache to Mongrel, and installing memcached... but this at least gets the Rails stack installed with gems.

Hope this saves you some time in your Rails development-

Thursday, August 23, 2007


Check out memcached if you're into researching ways of making your web apps speedy through clever database caching (used by LiveJournal, Slashdot, Wikipedia, SourceForge). Very, very cool.

By having your app check memcached first instead of going straight to the database, you reduce the overhead inherent in the ACID properties of relational database transactions. As I understand it from reading about it on a few sites: by hashing database records into a cache, reading from that cache where possible, and falling through to the database (and updating the cache) when the data isn't there, you end up greatly reducing the demand on the database.
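That read-through pattern fits in a few lines of Ruby. In this sketch a plain Hash stands in for the memcached client, and the "database" call is a hypothetical stand-in of my own:

```ruby
# Cache-aside / read-through: check the cache first, fall back to the
# database on a miss, then populate the cache for next time.
# (Hash stands in for a memcached client; names are illustrative.)
CACHE = {}

def expensive_db_lookup(id)
  "row-#{id}"   # pretend this is a slow SQL query
end

def fetch_user(id)
  CACHE[id] ||= expensive_db_lookup(id)
end

fetch_user(42)   # miss: hits the "database", stores the result
fetch_user(42)   # hit: served straight from the cache
```

With a real memcached client you would also set an expiry time and invalidate the key on writes, but the shape of the logic is the same.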

Makes me excited to think about the possibilities of an upcoming RESTful Rails app deployed on a clustered Ubuntu instance on EC2, utilizing memcached, that is Gears-enabled for offline capability as well... whoops, I wandered off...

Here's some info on memcached. I'm going to definitely check it out and try to implement:

memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.

Tuesday, August 21, 2007

domo arigato, Shiira

...if you drive a Mac, check out Shiira. Came across it during a Rails/Capistrano Peepcode by Geoff Grosenbach (which are all awesome, btw, if you're interested in Rails development. Peepcode is like a microwave oven for learning).

I had seen Geoff using Shiira in a few PeepCodes and decided to give it a whirl... it has some interesting features, and is SUPER FAST on my MacBook Pro. (that's a quantitative measurement ;)... but give it a try, doubt you'll disagree)

Shiira is based on WebKit and written in Cocoa. (In case you haven't noticed, there is a lot of momentum behind WebKit, given Safari, iPhone, Adobe AIR...)

Wednesday, August 01, 2007

podcast on Ajaxian : Joe Hewitt interview

This dude is cool. A Titan.

I highly recommend skating over to Ajaxian and queuing up the podcast.

Hewitt... originally worked at Netscape starting in 2000... DHTML ever since... worked on Netscape 6 and 7... then helped create Firefox (he says "we wanted to create a browser that didn't suck")... pretty much single-handedly created Firebug (which pretty much every Ajax dev uses nowadays)... created the Parakey spinoff with fellow Firefox dev Blake Ross... acquired by Facebook this month. [It sounds like, from Joe, we will see the fruits of Parakey's labors in the near future. As prolific as Joe Hewitt is, they had not yet released the product they'd been working on for two years when Facebook bought it. It's going to be fun to watch what happens there.]

(…check out iUI…)

Hewitt sayz:

‘Apple has the lead in CSS (WebKit)’

‘Mozilla has the lead in JavaScript (Gecko)’

‘Python is my favorite language’

“between Mozilla and Microsoft, my grandchildren will write some great apps”

“Mozilla has a large legacy codebase… spending a lot of time refactoring code, instead of creating new features”

“I liked what Flock did… would have liked to see more of that with FireFox… maybe they’re right, because Flock is struggling”

iPhone: “I’m as cynical as it gets with mobile development. But the iPhone is going to be huge. Every other manufacturer will copy the form factor. Hopefully many will follow by using WebKit … Nokia uses WebKit …WebKit is amazing. Open Source. Small and Light. Nothing stopping anyone who doesn’t have political reasons for using WebKit”

This is pretty funny conversation from the podcast:

When asked ‘how come you are able to churn out so many apps, side projects like Firebug, iUI, in addition to parakey work, etc’:

Hewitt: “No no… I decided a few years ago I was going to do nothing else but write code”

[big laughter from Ben, Dion]

Hewitt: “I’m not even kidding… my girlfriend and I broke up, I have this routine where I--

Almaer [jokes]: “...the dog died…”

Hewitt: “hey you have to put food in the bowl, who has time for that?!”

Great stuff. And awesome information.

The Forever Inspirational, Howard Armstrong

Howard Armstrong... a true genius and an incredibly driven man. An earth mover. He was made not by politics or marketing, but by his own authentic understanding of the science behind his inventions and his effort to drive them to fruition. What an innovator.

Most of Armstrong's life was a phenomenal inspiration, but he was ultimately broken by David Sarnoff, his longtime colleague -- RCA's cutthroat tactics coming after 20 years of a futile legal battle with the boastful and baseless Lee DeForest.

Despite his greatness, Armstrong's demise came largely because he never learned to compromise, always seeking the absolute victory on his own.

Sunday, July 22, 2007

Rails running on Amazon's Elastic Compute Cloud

This is the direction I'm possibly pointing toward for hosting the Ajax apps I develop in the [not-too-distant] future. At the very least, I'm going to research it a good amount: running a Rails virtual machine (or multiple clustered VMs) on Amazon's EC2.

I currently use Amazon S3 not only for backups, but also for cross-domain resource loading (static images and JavaScript files). The value presented by AWS's EC2 is too much to ignore when thinking about the future. It's nice to see some Rails plugins already springing up around this (even though EC2 is still in limited beta, as far as I know), and no doubt that will continue. It's going to be a fun couple of years and beyond!

Deploy Rails app on EC2 via Capistrano

The site explains that the new version of Capistrano (v2) broke the plugin's functionality, but it's a good reference to keep an eye on as this matures.

I am pointing my research in this direction for my independent, out-of-work apps, and can only assume many others in industry -- even companies looking to host their apps -- might move this way as well, given the value it presents. One can pretty readily set up redundancy (multiple clustered VMs) for fault tolerance; you only pay by the hour it's running; and, last but not least, it's effectively infinitely scalable. [Amazon's EC2 approaches infinity, at least for practical purposes :)]

So, for example, in the case of one site that runs 12 weeks per year, I would effectively pay a maximum of 24 hours * 7 days/week * 12 weeks/year * $0.10 per instance-hour = $201.60 per year for hosting (plus data transfer at $0.10 per GB). Since the site doesn't transfer much data beyond static images, JavaScript files, and generated HTML, that wouldn't add much. Right now I pay about $70 * 12 months = $840 for hosting a variety of sites -- about what one would pay to run Rails on EC2 (not including data transfer) for a full year (~$876). But, again, the scalable nature of EC2 is key.
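The back-of-the-envelope arithmetic, checked in Ruby (rates as quoted in the post):

```ruby
rate = 0.10                                 # $ per EC2 instance-hour (2007 pricing)
seasonal  = (24 * 7 * 12 * rate).round(2)   # 12-week season, around the clock
full_year = (24 * 365 * rate).round(2)      # running the whole year

puts seasonal    # => 201.6
puts full_year   # => 876.0
```

The gap between $201.60 and $876 is the whole pitch: with hourly billing you stop paying the moment you shut the instance down.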

Tuesday, July 17, 2007

JSONRequest.js -- from the genius of Doug Crockford

I wish there was more chatter lately about JSONRequest, and similar secure XSS proposals.

We can all think of legitimate reasons why, as developers, we'd very much like to be able to request and return data from remote sites, and I am EAGERLY awaiting the dust settling on this issue. Because of the security model of XMLHttpRequest, this sort of data exchange is not possible: the 'same origin policy' restricts a web page from communicating via an XMLHttpRequest (i.e., Ajax) call with a server on a different domain:

"XMLHttpRequest has a security model which is inadequate for supporting the next generation of web applications. JSONRequest is proposed as a new browser service that allows for two-way data exchange with any JSON data server without exposing users or organization to harm. It exchanges data between scripts on pages with JSON servers in the web. It is hoped that browser makers will build this feature into their products in order to enable the next advance in web application development."
IBM has a well-written article discussing the issues, challenges, and proposals on the table with regard to secure cross-site data exchange. It provides some insight into what we can implement today and what is on the horizon, including Doug Crockford's JSONRequest proposal:

"Here and now

A more recently developed content-retrieval technique employs communication between a page's script and a hidden iframe through its src URL's fragment identifier (the part of the URL that comes after the # sign). Scripts in the parent page and embedded iframe can set each other's fragment identifiers despite coming from different origins. An agreed-upon communication protocol is maintained between the scripts, driven by JavaScript timers that periodically fire routines to check for changes in the fragment identifier.

Because the scripts must know each other's addresses and they must collaborate between themselves to agree on a protocol, trust is ensured. Because any server interaction is local to each component and separate from the inter-script communication, cookies are not exposed.

While still imperfect (for example, it relies on an anomaly that is not a designed behavior, and polling for changes is inferior to having an event fire in response to a change), this solution comes closer to providing browser-native, secure, in-page, cross-domain communication than any other.

Note: James Burke, a developer at AOL Developer Network, pioneered the fragment identifier technique and has built it into the latest releases of the Dojo Toolkit JavaScript library."

When will any of these get implemented? I can hardly wait.

Doug Crockford proposals:

JSONRequest files:

Dojo notes related to JSONRequestResponse:

conversation by some of the 'Titans'...

Ajax Experience Conference -- plus past presentations link

Was fishing around the Ajax Experience website (wishing I could be there! getting married in less than a month, so I can't make it), and came across the following link to a wealth of information from 2006 -- conference presentations, videos, etc.

The Ajax Experience is one conference I plan to budget for, in both dollars and vacation days, in 2008. Given the list of presenters and attendees, Dion Almaer and the Ajaxians have seemingly put together a wealth of knowledge in this 3-day conference -- it must be an intense three days. Many of the 'Titans' of JavaScript, Ajax, and Web 2.0 are there: key JavaScript gurus like Brendan Eich and Douglas Crockford, along with library creators and JavaScript experts John Resig, Christophe Porteneuve, Joe Walker, et al. (and many others). Here is the link to the conference happening in 9 days. There is another conference in October too:

Sunday, July 15, 2007

Phenomenal Execution by Apple with iPhone

Apple executes when it comes to designing, engineering, and marketing their products. Better than anyone in the hardware world.

"The Most Successful Product Intro of the 21st Century"

"Apple's iPhone could emerge as the most successful product introduction of the 21st century, new research suggests." Conducted by Lightspeed Research, "the research findings are staggering," reports Jonny Evans (Macworld). "Nearly 90 percent" of the respondents had heard about iPhone, and 32% of those who didn't already own one intend to purchase one. In a separate survey, Lightspeed Research also learned that "nearly half of those who would like to own an iPhone stated that the benefits of having music, movie, internet and wireless all in one was the top reason."

Saturday, July 14, 2007

Amazon S3 and Ruby on Rails

From a couple of perspectives, I find it tough to beat Amazon S3 for storing many file assets (especially web-development-related assets). The value and ease of use are very good, particularly for assets you want to reach from multiple locations and assets that should outlast a machine's lifetime (say, > 3 years). I personally much prefer to store data on the network, where I can access it from anywhere, anytime, even programmatically. Likewise, Amazon handles all redundancy, backups, etc. As a longer-term goal, I would like to automate an effective rsync of some data on my local machine to my S3 repository... open to ideas there if anyone wants to post them here or email me.

For those interested (and too lazy to click the link above!), here are the costs as of today:

Amazon Simple Storage Service pricing:

Storage

* $0.15 per GB-month of storage used

Data Transfer

* $0.10 per GB - all data transfer in
* $0.18 per GB - first 10 TB / month data transfer out
* $0.16 per GB - next 40 TB / month data transfer out
* $0.13 per GB - data transfer out / month over 50 TB

Data transfer in and out refers to transfer into and out of Amazon S3.

Data transferred between Amazon S3 and Amazon EC2 is free of charge.

* $0.01 per 1,000 PUT or LIST requests
* $0.01 per 10,000 GET and all other requests*

* No charge for delete requests

Storage and bandwidth size includes all file overhead
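As a worked example against those rates (the usage numbers below are made up purely for illustration):

```ruby
# Hypothetical month: 10 GB stored, 5 GB transferred out, 2 GB in,
# priced at the 2007 rates quoted above (all under the first 10 TB tier).
storage_gb, out_gb, in_gb = 10, 5, 2

bill = (storage_gb * 0.15 + out_gb * 0.18 + in_gb * 0.10).round(2)
puts bill   # => 2.6
```

Under $3/month for that workload (before per-request charges, which are fractions of a cent) is what makes S3 hard to beat for small asset hosting.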

Here is a great link on using S3 programmatically with Rails:

Building a Web Application with Ruby on Rails and Amazon S3

Wednesday, July 11, 2007

Prototype version 1.5.2_pre0

I noticed on Backpack that the 37signals guys are using a new version of Prototype.js. I haven't had time to check the diffs, but I wonder what has been added:

Thursday, June 21, 2007

Compelling User Interface -- BumpTop 3D Desktop

This is an innovative and creative approach to organizing files and information intuitively. I enjoyed learning about this, and it made me think of some new possibilities with regards to UI implementations.

Wednesday, June 13, 2007

E4X -- ECMAScript for XML

It is perhaps not critical to understand right now, but those who want a look at what may be ahead should consider checking out E4X, or ECMAScript for XML. It has already been implemented in SpiderMonkey and Rhino, and has been standardized as ECMA-357.

E4X makes XML objects first-class JavaScript objects -- just like objects, arrays, functions, and regular expressions (which I, for one, admittedly need to get better with).

This is particularly attractive -- from page 11 of the E4X slides:

* Expandos make markup composition a snap!
* Just start appending extra property tiers:

var html = <html/>;
html.head.title = "My Page Title";
html.body.@bgcolor = "#e4e4e4";
html.body.form.@name = "myform";
html.body.form.@action = "someurl.jss";
html.body.form.@method = "post";
html.body.form.@onclick = "return somejs();";
html.body.form.input[0] = "";
html.body.form.input[0].@name = "test";

* Results in this XML:

<html>
  <head>
    <title>My Page Title</title>
  </head>
  <body bgcolor="#e4e4e4">
    <form name="myform" action="someurl.jss"
          method="post" onclick="return somejs();">
      <input name="test"></input>
    </form>
  </body>
</html>

...the future is surely bright for open source, browser native, Ajax.

Saturday, June 09, 2007

Best RegEx tool

This free, web-based RegEx tool is great! It allows you to experiment with RegExes and immediately get feedback on what you're trying to accomplish. Great for writing JavaScript Regular Expressions:
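For instance, here is the kind of pattern you might prototype in such a tool -- a deliberately simple email matcher, purely as an illustration (it is nowhere near a full RFC-compliant validator):

```javascript
// Simple email pattern: word chars / dots / plus / hyphen, an @,
// then at least one dotted domain segment.
var emailRe = /^[\w.+-]+@[\w-]+(\.[\w-]+)+$/;

emailRe.test("user@example.com"); // true
emailRe.test("not an email");     // false
```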

Sunday, June 03, 2007

Google Gears - Performance Out of the Box

Google Gears was released last week and it's been hugely exciting to get a look at this api, along with some of the apps people have come up with already.

Google Gears API

Obviously many of us had been looking at Adobe Apollo for a while, and Google Gears is of the same mindset -- but even more encouraging, it was released under the very liberal BSD license, which is great news for those of us excited about the rapid advancement of this technology. It will be exciting to see the rapid progression of this 'platform' for offline integration of web applications. The community of web developers has so many ideas, and Google's release of Gears to the community in this manner will no doubt generate many more; my guess is that a number of them will find their way into the next release of Gears.

As an Ajax developer, where I am particularly interested in Gears is performance, over and above online/offline synchronization as a feature. The two go hand in hand, and synchronization has a specific place in many apps (internet connection dropped, queueing up work completed offline, etc.), but I feel the performance aspect of Gears, with its offline database and client-side file cache, will become huge. I am looking forward to seeing engineers benchmark their web applications once they have been enabled with, and optimized for, Gears.

Developing with Google Gears in mind could become what Ajax has become over the last two to three years. What I mean is that there are significant performance benefits to be gained by limiting the number of trips to a server (both web and database) and processing items in batches. We have already seen this with Ajax applications sending small bits of data to the server rather than whole-page requests, and Gears takes it a step further. We now have a larger set of boundaries to work with as web developers: we can choose how much to update and when to update, along with a client-side failsafe. Having the cross-operating-system desktop intermediary is the enabler here, and the deployment potential of a large player like Google is obvious.
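The batching idea described above can be sketched independently of Gears itself. Here is a hypothetical write-behind queue (all names here -- BatchQueue, flush, etc. -- are made up for illustration) that accumulates updates locally and sends them to the server in one trip instead of many:

```javascript
// Hypothetical write-behind queue: collect updates client-side,
// then flush them to the server in a single batched request.
function BatchQueue(send) {
  this.pending = [];
  this.send = send; // function that would POST one batch to the server
}

BatchQueue.prototype.add = function (update) {
  this.pending.push(update);
};

BatchQueue.prototype.flush = function () {
  if (this.pending.length === 0) return 0;
  var batch = this.pending;
  this.pending = [];
  this.send(batch); // one round trip for the whole batch
  return batch.length;
};

// Usage: queue three edits, make one trip instead of three.
var sent = [];
var q = new BatchQueue(function (batch) { sent.push(batch); });
q.add({ id: 1, title: "draft" });
q.add({ id: 2, title: "notes" });
q.add({ id: 3, title: "todo" });
q.flush(); // sent.length === 1, sent[0].length === 3
```

With Gears, the pending updates could additionally be persisted in the local database, so a dropped connection loses nothing.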

As Google continues its collaboration with Mozilla Firefox (I can't wait for classes in JavaScript 2.0, by the way!) and open source libraries like Dojo, it will become easier to integrate these offline caching and syncing paradigms into our development practices.

All of this enables richer, more responsive user experiences for the end user. It enables exciting new architectures for web applications. The future is surely bright for the paradigm of web-based applications growing in performance, capability and reach.

VirtueDesktops for Mac + Dojo.sql ENCRYPT(?) screencast

This is wild. For those of us awaiting the release of "Spaces" in Mac OS X Leopard, in the meantime you can download VirtueDesktops.

I caught the use of this during a screencast by Brad Neuberg, and have been playing around with it on my Mac today. You can see him use VirtueDesktops in this screencast, where he efficiently moves between different desktops. IMO, it's a great way to efficiently organize my machine. Thanks for the link, Brad!

Incidentally, the screencast is related to encrypting data with Dojo Offline. If you're interested in that screencast it is here, it's pretty sweet:

On the Google Gears mailing list Brad mentions that enabling transparent encryption and decryption will be in the Dojo Offline port that runs on top of Google Gears in about 2 weeks. Looking forward to seeing that high level lib on top of Gears!

Thursday, May 31, 2007

Sunday, May 27, 2007

Using Firebug in IE, Opera, Safari

Most people know about Firebug, as a plugin to Firefox... but you can also debug javascript in other browsers with Firebug, by installing Firebug Lite.

It's awesome to include a .js file, open up Internet Exploder, and use a lot of the Firebug goodness to check browser compatibility. (After testing, or prior to deploying to your production web server, simply comment out that script include and you won't take a hit on page load times.)
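A related trick: guard your console calls so leftover debug statements don't throw errors in browsers where Firebug (or Firebug Lite) isn't loaded. A minimal sketch, assuming a global console may or may not exist (the name safeLog is made up):

```javascript
// Safe logging shim: a no-op when no console is available, so stray
// debug calls don't break browsers without Firebug / Firebug Lite.
function safeLog(msg) {
  if (typeof console !== "undefined" && console.log) {
    console.log(msg);
  }
}

safeLog("debug message"); // logs where possible, silently ignored elsewhere
```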

Here is some more Firebug documentation and references from their site:

Core Javascript & Ajax Development Links

...some handy references for all of us writing Javascript and Ajax apps (specifically with Prototype.js, but many of the links help with writing and testing JS in general). Enjoy:

Core Javascript 1.5 Reference

Javascript Style Guide on Mozilla's Development Center:

Prototype Javascript Framework API docs:

Effects Library tag docs:

Dan Webb's LowPro.js:

Joe Hewitt's Firebug plugin docs and download:


"Quality", by Douglas Crockford

Insightful presentation by Douglas Crockford on "quality". Engaging and worth watching:

Tamarin Project -- Adobe contributes to Mozilla

This is older news, but certainly worthy of repeating (now that I have a couple free hours to get my thoughts out).

If you're interested in where the future of client-side software development is going, definitely read this post and the related link. It discusses 'Tamarin', a collaborative project between Adobe and Mozilla -- Adobe contributed source code from the ActionScript Virtual Machine to the Mozilla Foundation.

What all of this means, as Brendan Eich states: "now web developers have a high-performance, open source virtual machine for building and deploying interactive applications across both Adobe Flash Player and the Firefox web browser."

Especially insightful is the 'comments' section, where Brendan Eich (the creator of the JavaScript language, one of the first people at Netscape, and now the Chief Technology Officer at Mozilla, responsible for its architecture and technical direction) explains the nature of the performance gains that will come from a just-in-time JavaScript compiler.

All of this gives us insight into the web-centric paradigm that both Adobe and Mozilla (along with Google and others, obviously) are pointed towards, a paradigm with huge momentum. This collaboration, along with items like Adobe Apollo (and similar efforts by Sun, for instance), makes it a very motivating time to be a web application developer. We are at the beginning of being able to create deeply rich web applications -- apps that run efficiently in a browser that is continually increasing in performance. These web applications can be programmed to run in a web-connected or disconnected mode.

The Ajax libraries and toolsets that have sprung up in the last few years (e.g. Prototype, Dojo, et al), the advancement of the browser itself, collaborations between open source and industry (Adobe, the JVM, Flex), and the many open source communities supporting this vision for the future of the web really get me excited and motivated to wake up, write code, and dig deeper each day into the nuances of these technologies. Good times-

(ps, here is another good read:

Javascript Compressors

Javascript compression is used by professional web developers to shrink JS files and decrease the latency of page loads. Which compression tool do you use? Do you use any that aren't on this list? I'm interested in hearing what other people are using.

Dojo ShrinkSafe: uses Rhino (a Javascript engine written in Java) to compress code by actually parsing it, rather than relying on regular expressions

JSMin, The Javascript Minifier

The Javascript Compressor:

(another useful link, not related to compression, but a tool that is helpful in writing Javascript in addition to Firebug: JSLint --
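To illustrate the basic idea behind these tools -- and only the idea, since real compressors like ShrinkSafe and JSMin tokenize the source properly -- here is a naive sketch that strips comments and collapses whitespace. It would mangle strings that contain comment-like text, so treat it as a demonstration, not a usable minifier:

```javascript
// Naive "minifier": strips /* */ and // comments, collapses runs of
// whitespace, and trims. Real tools parse the code instead, because
// regexes like these break on comment-like text inside string literals.
function naiveMinify(src) {
  return src
    .replace(/\/\*[\s\S]*?\*\//g, "") // block comments
    .replace(/\/\/[^\n]*/g, "")       // line comments
    .replace(/\s+/g, " ")             // collapse whitespace
    .replace(/^\s+|\s+$/g, "");       // trim
}

var out = naiveMinify("var x = 1;  // counter\n/* note */ var y = 2;");
// out === "var x = 1; var y = 2;"
```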

Firebug, Joe Hewitt; (high rez screencast from YUI Theater)

...Firebug is an unbelievable Javascript tool -- for creating, experimenting, and debugging Javascript. Check out Joe Hewitt's presentation:

"Web 2.0 -- The Machine is Us"

Thursday, May 03, 2007

Coldfusion MX Developers Needed -- Send me your Resume

We are looking to hire a handful of Coldfusion MX Developers for a set of projects. These are full time positions here in Seattle, WA. Contact me by phone or email if you are interested. The company that I am working with has a great culture, very cutting edge using the latest CF7 and CF8 BETA, along with db technologies like mySQL, PostgreSQL, Oracle, some SQLServer.

We are looking for Coldfusion Developers of all skill levels, but basically people with a passion for development with CF. It's as good of an opportunity as exists in the NW in terms of competitive pay, great culture, great projects, great people, growing company. You'll have a chance to do some cutting edge CF development with a great group of people.

My contact info to send your resume:
cell: 425.345.6764

Wednesday, March 21, 2007

accessing Coldfusion Components (CFC's) question

I haven't had any problems accessing my components (CFCs) when I use dot notation on my localhost, nor have I ever had problems using dot notation when I have a path mapping set up on a remote server. However, the code below does not work when calling components in an environment where a mapping is disallowed (a shared environment, for instance, where you don't have access to the CF administrator). I'm trying to set up my process in a way that doesn't require full paths, for ANT builds, etc., so it can be ported to different machines.

Any suggestions for how to access components in a directory such as wwwroot/ApplicationName/Model/Components

where I cannot have a mapping setup? Is there a way to do this with CFOBJECT (in a modified format from below), or do I need to resort to CFINVOKE? If so, what would the path callout look like using CFOBJECT or CFINVOKE?:

<cfobject name="oContactData" component="ApplicationName.Model.Components.CFCmyprofile">
<cfset qContacts = oContactData.getMyContacts(SESSION.auth.SkillshowUserID, -1)>

Looking for a modified line of the above code that works in a relative path manner-

Tuesday, March 20, 2007

Apollo -- check it out

Apollo Alpha has been released. I'm excited to try it out:

Here is a nice look at some of what can be done for those interested:

Example presentation of Apollo (showcasing Ebay sample app)

this is an older demo from Dec 2006, shows Apollo with Amazon's api's, Google Maps mashup with client-side address Vcards, and some other items (like Flex):

One intriguing thing about Apollo is that you can use existing web tech to build apps that interact with, and run on, the client. This demo shows an application built using Adobe's Flex technology on the front end (the View), with Ebay on the back end delivering data to the client. But that is just one incarnation. Apps could also be built with HTML+CSS and a database, and run equally well on the Apollo cross-OS runtime.

The API for synchronizing data online/offline, and for accessing the file system, is what holds a lot of the power for developers (imo); I'm sure you can find other benefits. Apollo's APIs simplify the process of handling this. There are clever ways of doing it now, but they use Flash files and other client-side workarounds to get the job done, and are limited in the amount of data you can save on the client (without changing preferences, etc.).

Basically Apollo extends the reach of web dev tools (no need to write code in Java, C++, VB, etc.; instead use HTML, CSS, Ajax, Flash/Flex, and so on) to deliver applications that run on a user's machine as well as online. It equips web app developers with an extra set of APIs, along with a cross-OS runtime, that they can use to extend the reach of their apps... it has been developed for the 'mostly connected to the internet' paradigm that the world is moving towards.

Sunday, February 25, 2007

Coldfusion MX 7.02 on Mac Book Pro

Trying to get my dev environment up using CFMX on a MacBook Pro. I have followed Mark Andrachek's [awesome] instructions, but am encountering a problem.

Symptom: after completing all steps, when I go to: http://localhost/CFIDE/Administrator/ I receive the following error message:

Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.

Please contact the server administrator, [no address given] and inform them of the time the error occurred, and anything you might have done that may have caused the error.

More information about this error may be available in the server error log.

Apache/1.3.33 Server at mark-holtons-computer.local Port 80

- Apache (1.3) is running on my localhost (I know this because when I browse to it, I see:

"If you can see this, it means that the installation of the Apache web server software on this system was successful. You may now add content to this directory and replace this page."

In step 1 of Mark's instructions, I had been able to verify the CF Administrator was up and running, but now I cannot access it via:

I'm thinking this has to do with the connector and perhaps I didn't configure something correctly there. I'm pretty sure I compiled the Apache HTTPD/JRun4 Connector right, as I've followed the directions to the letter:

"Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request."

If I run a cat error_log from the shell, here is what the Apache error_log ("/private/var/log/httpd/error_log") states, line for line, when I make the request to http://localhost/cfide/administrator/ via the browser:

[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] could not initialize proxy for
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] could not open "/Applications/JRun4/lib/wsconfig/1/": Permission denied
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] initialized proxy for
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] Couldn't initialize from remote server, JRun server(s) probably down.
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] could not initialize proxy for
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] could not open "/Applications/JRun4/lib/wsconfig/1/": Permission denied
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] initialized proxy for
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] Couldn't initialize from remote server, JRun server(s) probably down.
[Sun Feb 25 10:58:13 2007] [notice] jrApache[449:49825] JRun will not accept request. Check JRun web server configuration and JRun mappings on JRun server.

That file, which I had to create for connection purposes, has permissions of:
-rwxr-xr-x 1 holtonma holtonma 28B Feb 24 22:50

...what properties does this file need to have? (Does it need a user/group of "admin admin" instead of "holtonma holtonma"?)

Greatly appreciate any words of wisdom anyone can provide.

Monday, February 19, 2007

Solaris 10 Containers (Zones) and Coldfusion MX Licensing

With the Enterprise version of Coldfusion MX 7, a CF developer has the ability to deploy applications as .EAR files on multiple instances of the JRun application server (or any other J2EE-compatible application server, such as BEA WebLogic, IBM WebSphere, or even open source Tomcat, though Tomcat isn't supported by Adobe). This redundancy enables fault tolerance, which is especially useful in a shared hosting environment. That is, should one instance of the application server fail for one application, it would not crash all applications relying on the Coldfusion server. Instead, each application can utilize its own instance of the JRun (or other J2EE) application server.

If you are not familiar with this, the following is a good article, and Ben Forta's Advanced Coldfusion development book (a veritable CF bible) is a phenomenal reference:

The J2EE implementation of Coldfusion discussed above really exists for a single shared server environment, as the LICENSE restricts .EAR deployment to a maximum of 2 CPUs. The marketing material is not immediately clear on this, so be aware: J2EE deployment is unlimited in terms of application server instances, but is limited to TWO CPUs. If I am incorrect on this, someone please provide clarification, but after days of looking into this I'm fairly certain this is the case.

There are still distinct advantages here -- not only the capability for application server redundancy (and clustering), but also the fact that deploying as .EAR files gives the development process a bundled, dated version of the application. While it's common, and certainly best practice, to utilize version control such as CVS or Subversion, it's not common in the Coldfusion community to bundle and deploy applications as .EAR files. This is evidenced by the options available at most Coldfusion hosting sites, which consist mainly of FTP-ing files to your folder. (There are small disadvantages to .EAR deployment if you require frequent small changes to files, but these can be largely mitigated by using Apache ANT as a build, zip, FTP, and deploy tool.) .EAR deployment has the distinct process advantage of being able to snap back to a dated version of one's web application instantly: as long as there weren't schema changes to the database, a developer can simply deploy an .EAR file to the application's directory and it will effectively 'unzip' the application and all its dependencies.

That brings me to my question. As CFMX 7 enterprise enables isolation and redundancy at the application-server level, Solaris 10 has the capability for Containers (aka 'Zones') for isolation and redundancy at the server level itself. The following is a summarized description of Solaris 10 Container capabilities from Sun's website:

  • "Build customized, isolated containers—each with their own IP address, file system, users, and assigned resources—to safely and easily consolidate systems
  • Guarantee sufficient CPU and memory resource allocation to applications while retaining the ability to use idle resources as needed
  • Reserve and allocate a specific CPU or group of CPUs for the exclusive use of the container
  • Automatically recover from potentially catastrophic system problems by leveraging the combined functionality of Predictive Self Healing and Solaris Containers"
My question to Adobe and the Coldfusion MX Community:
How would the implementation of multiple Sun Solaris 10 Containers affect the licensing of Coldfusion MX 7? If a server had 2 CPU's but was configured for multiple Solaris Zones, would the Enterprise License still apply in such a way that enabled unlimited instances of the application server throughout these Zones? Or is each Zone treated separately as a CPU? It would seem to me, since there are two processors, the enterprise license would allow for this type of integration with Solaris 10 Zones, but I cannot find any documentation on this on Adobe's Coldfusion site or in the license. Can anyone at Adobe help clarify this?

Mark Holton

Coldfusion MX 7 and Sun Solaris 10 for x86 processors

Hopefully this bit of history is a good sign that it's just a matter of time before there is a supported Coldfusion MX 7 install for Sun Solaris 10 on x86 processors. I know they support it on the SPARC processor, but why not on the newer x86 chips?

Macromedia/Adobe has supported it in the past, hopefully they will in the near future. Anybody know of any plans for this?


"Macromedia is committed to supporting the Solaris OS to ensure our ColdFusion developers can continue to deliver mission-critical business applications. Solaris 10 will provide customers with the scalable, reliable platform they need to continue delivering rich Internet applications with the ease of use and productivity of ColdFusion."

Jeff Whatcott
Vice President of Product Management

Saturday, February 17, 2007

Setting Up a Mac for Coldfusion Development

Matt Woodward has an extensive PDF containing everything needed to get your Mac (even a MacBook Pro with Intel chips) set up for Coldfusion web development, and much more. I'd highly recommend checking it out if you have a MacBook Pro or iMac!

Thanks for taking the time to do that, Matt, this is great and something I (and I'm sure other CF Devs) have been looking for. Great resource!

Monday, February 12, 2007

Touch Screen Technology : What if this was in a Mac?

Came across this technology today and it made me think:

...what if this tech was integrated into, for instance, a Mac? It's not that far-fetched: Apple already implemented touch screens with the iPhone, so why not with laptops, for a new paradigm in user interfaces on larger displays? I personally believe it'd open up a number of possibilities in software to navigate via touch screen in addition to traditional mouse navigation. I think the world is ready for a new navigation paradigm, and this seems to be next in that evolution.

"In this video, Jeff Han and Phil Davidson demonstrate how a multi-touch driven computer screen will change the way we work and play."

Sunday, February 11, 2007

Mickelson wins at Pebble Beach

Great victory for Phil Mickelson this week at Pebble Beach, tying the record at -20 and winning by 5 shots.

Couldn't happen to a classier guy. The following is a great story by former Chicago Tribune writer, Bob Verdi:

Monday, February 05, 2007

Install Error using Coldfusion MX 7 on Sun Solaris 10 x86

Anyone out there tried to install Enterprise CFMX 7 on Sun Solaris 10? It is a supported OS for Coldfusion 7, but we are encountering errors.

We get to the point where we copy the file (coldfusion-70-sol.bin) from the installation cd to our server, then go to that directory and start the installation.

It starts running fine, "Launching installer...", but then we receive a:

Solaris/resource/jre/bin/java: Invalid Argument

We attempted to run the version of the JRE on the cd, and we receive the following:

jre/bin/java: cannot execute

Any help would be greatly appreciated.

Thursday, January 18, 2007

Roy Beck's celebrated demonstration of the population consequences of current U.S. immigration policies has entertained and shocked audiences across the country. This video is packed with the facts and analysis that make moral and practical sense of a complex and highly contentious issue.