I am going to be up in Portland for the OSCON Data tracks this upcoming week. Are you going?
Last Wednesday night I whipped together a prototype of an application to test some architectural changes and make delivering large amounts of data to a client easier for both of us. The end result was to be a centralized datastore for all of our apps that could be accessed via a very simple API.
Testing went well, and then the floodgates opened. Several hundred thousand requests in the opening hour had generated almost 60GB of data in Mongo. While every aspect of the system functioned better than expected (especially for a late-night prototype), the amount of data being generated so quickly was alarming.
When I tried to implement the zlib compression the night before, Mongo threw fits, and I did not have time to deal with it. Now was the time. Finding the answer to the errors I was getting was tough. The app itself uses the Mongoid ORM for Rails, and the background workers use the Mongo driver for Ruby. The deflate happens in the background workers and the inflate in the Rails app.
Let’s start with the easy change, Mongoid:
field :large_data, :type => Binary
That simply tells Mongoid to expect a binary object and insert it as such.
The more difficult issue I ran into was actually doing the insert with the Mongo driver in the Ruby scripts. I simply wasn’t looking for the right thing. What I needed to do was convert my zlib output into a new BSON::Binary to be stored in Mongo.
Then simply call the standard Mongo insert command.
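The worker side looks roughly like this. A minimal sketch, not the exact production code: the payload, `store_compressed`, and `collection` are illustrative names, and depending on your driver version `BSON::Binary.new` may want a byte array (`deflated.unpack('C*')`) or a string plus a subtype (e.g. `:generic`) rather than the bare string shown here:

```ruby
require 'zlib'

# Stand-in payload; in the real app this was a large JSON or HTML string.
payload  = '{"key":"value"}' * 500
deflated = Zlib::Deflate.deflate(payload)

# Worker-side insert. `collection` is assumed to be a collection handle
# from the mongo-ruby-driver, which provides BSON::Binary.
def store_compressed(collection, id, deflated)
  collection.insert('_id' => id, 'large_data' => BSON::Binary.new(deflated))
end
```

Wrapping the deflated string in a BSON::Binary is the step that stops Mongo from choking on the raw compressed bytes.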
The last piece was back in the Rails app: I needed to inflate and return this data. The caveat I found was needing to turn the BSON::Binary into a string before trying to inflate it.
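The read path can be sketched like this (`inflate_field` is my name for it, and I’m assuming the stored field comes back as a BSON::Binary whose `to_s` yields the raw compressed bytes):

```ruby
require 'zlib'

# Rails-side read path: the driver hands the field back as a BSON::Binary,
# so convert it to a plain String before handing it to Zlib.
def inflate_field(stored)
  Zlib::Inflate.inflate(stored.to_s)
end
```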
The end result was a system that was still keeping up with demand but was now storing far less data. My results: a 50K JSON string down to 12.5K, and a 300K HTML file down to 72K.
Hope this helps anyone looking to squeeze a little more out of their storage solution.
I don’t even get involved in the VIM vs EMACS flame wars. Why? Because I am one of the people who can unify them: I love a good IDE. I am willing to put up with the “bloat” to gain the benefits a well-built IDE can bring. Over the years I tried many of them and kept going back to Eclipse, even if it left me unsatisfied. Last year I started using RubyMine from JetBrains for Rails development and never looked back at TextMate or RadRails. Recently, I started writing more Flex and Android code, as well as maintaining older ColdFusion applications. This drove me back into Eclipse, but left me wanting a RubyMine-like alternative.
After some searching I found IntelliJ (another JetBrains product), which actually works with Flex, Android, ColdFusion, Ruby, and Rails. I have now used IntelliJ for all of my development for the last few months and will not be going back to Eclipse anytime soon. While it is heavy, it runs rings around both Ganymede and Helios on my MacBook Air.
Support from JetBrains has been awesome. They have been very responsive to all of my questions (although those could have been avoided with better documentation) and are constantly updating their various IDEs. Which is another thing: they have standalone IDEs for PHP, Python, Ruby/Rails, .NET, Java, and even basic HTML/CSS. While all appear to be built off the granddaddy IntelliJ, their developers are great about building IDE-specific functionality into plugins for IntelliJ. This allows one of my favorite features: creating modules for different languages in one project. So now I can have my Robotium Android tests, my Android app, and the web app that provides an API all open in one IDE that is truly useful across the different languages.
It also supports hooks into TeamCity (their CI server) and YouTrack (their bug-tracking software), which is a feature I am looking to try out next.
If you are looking for an alternative I would recommend giving JetBrains a try.