Quick read but super informative. Anyone looking to set up shards and clusters using MongoDB should read this book. What is really nice is the author’s first-hand knowledge of the system (she is a core contributor) and her explanation of the priorities and milestones of future releases. That helped keep this book from falling behind a technology that is evolving quickly.
Archives for March 2011
Ran into an interesting Ruby 1.8.7 issue the other day. I had some UTF-8 data that needed to be escaped and returned as a query string in a URL. The system was already using CGI.escape, so I went with it. While it claimed to return a UTF-8 encoded string, running it through a parser showed something was not quite right.
Upon digging around the web, I found all kinds of workarounds using Iconv, but none did the trick for me. Ultimately I found a bug posted for Ruby 1.8 with a similar description. The other recommendations involved moving to 1.9.2, which is not an option at this point for other reasons.
Ultimately I switched to URI.escape, and that did the trick. There are some other differences between the two, but all minor compared to getting a properly UTF-8 encoded URL.
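For reference, here is what correct percent-encoding of a UTF-8 string should look like. This is a minimal sketch, not the 1.8.7 setup from the post: on modern Rubies CGI.escape handles UTF-8 correctly, and URI.escape has since been deprecated and removed, so the example uses CGI.escape to show the expected byte-level encoding.

```ruby
require 'cgi'

# A UTF-8 string with a multibyte character (é is the two bytes 0xC3 0xA9).
query_value = "café"

# Each byte of the multibyte character gets its own percent-escape.
escaped = CGI.escape(query_value)
puts escaped  # => "caf%C3%A9"

# Round-trip check: unescaping should give back the original string.
puts CGI.unescape(escaped) == query_value
```

If the escaped output shows anything other than one `%XX` escape per byte, the encoder is mangling the UTF-8 data, which is the symptom the parser surfaced here.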
My rating: 4 of 5 stars
Good book with some great insights on game design theory as it could be applied to the real world. Some of the author’s examples from games she has been involved in are used to illustrate her points and can drag on, but stick with it: there are great nuggets of information in those examples.
Last Wednesday night I whipped together a prototype of an application to test some architectural changes and make delivering large amounts of data to a client easier for both of us. The end result was to be a centralized datastore for all of our apps that could be accessed via a very simple API.
Testing went well, and then the floodgates opened. Several hundred thousand requests in the opening hour had generated almost 60GB of data in Mongo. While every aspect of the system functioned better than expected (especially for a late-night prototype), the amount of data being generated so quickly was alarming.
When I had tried to implement zlib compression the night before, Mongo threw fits, and I did not have time to deal with it. But now was the time. Tracking down the cause of the errors I was getting was tough. The app itself uses the Mongoid ORM for Rails, and the background workers use the Mongo driver for Ruby. The deflate happens on the background workers and the inflate in the Rails app.
Let’s start with the easy change, Mongoid:
field :large_data, :type => Binary
That simply tells Mongoid to expect a binary object and insert it as such.
The more difficult issue was actually doing the insert with the Mongo driver in the Ruby scripts; I simply wasn’t looking for the right thing. What I needed to do was convert my zlib binary into a new BSON Binary to be stored in Mongo.
Then simply call the standard Mongo Insert command.
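Put together, the worker side looks roughly like this. It is a sketch, not the original code: the deflate step uses stdlib zlib and runs anywhere, while the BSON Binary wrap and insert are shown as comments because they assume the 2011-era mongo/bson gems and a live collection handle.

```ruby
require 'zlib'

# Deflate the payload before storing it (repetitive markup compresses well).
html = "<html>" + ("<p>lots of repeated markup</p>" * 200) + "</html>"
deflated = Zlib::Deflate.deflate(html)

# Wrap the raw bytes in a BSON Binary before inserting -- hypothetical sketch
# of the driver calls, which need the mongo/bson gems and a connection:
#
#   require 'mongo'
#   blob = BSON::Binary.new(deflated)
#   collection.insert('large_data' => blob)

puts deflated.bytesize < html.bytesize  # the stored blob is much smaller
```

Inserting the raw deflated string without the BSON Binary wrapper is what made Mongo throw fits: it tries to validate the bytes as a UTF-8 string, and compressed data is not valid UTF-8.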
The last piece was back in the Rails app: I needed to inflate and return this data. The caveat I found was needing to turn the BSON Binary into a string before trying to inflate it.
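The read path mirrors the write path. Again a sketch: the `to_s` call on the fetched BSON Binary is the caveat from the post, shown as a comment, and the stored value is simulated with plain zlib so the example runs without a Mongo connection.

```ruby
require 'zlib'

# In the Rails app the document would come back from Mongo, e.g.:
#   doc = collection.find_one('_id' => id)
#   raw = doc['large_data'].to_s   # BSON Binary must become a String first
#
# Simulate the stored value so the sketch is self-contained:
raw = Zlib::Deflate.deflate('{"status":"ok","items":[1,2,3]}')

# Inflate the string back into the original payload to return to the client.
json = Zlib::Inflate.inflate(raw)
puts json  # => {"status":"ok","items":[1,2,3]}
```

Passing the BSON Binary object straight into `Zlib::Inflate.inflate` fails, which is why the string conversion has to happen first.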
The end result was a system that was still keeping up with demand but was now putting away far less data. My results: a 50K JSON string came down to 12.5K, and a 300K HTML file came down to 72K.
Hope this helps anyone looking to squeeze a little more out of their storage solution.