Matt Connolly's Blog

my brain dumps here…

Tag Archives: ruby

A year with ZeroMQ + Ruby

It has been a year since my first commit to rbczmq, a Ruby binding gem for ZeroMQ that wraps the higher-level CZMQ library.

I don’t recall exactly how I first heard about ZeroMQ, but needless to say, I got interested and started reading up on it. At the time, I discovered there were three gems for using ZeroMQ in ruby:

  • zmq – the original official library. This is old: it tracks ZeroMQ version 2 (ZeroMQ itself is now at version 4) and has not been updated for nearly 3 years.
  • ffi-rzmq – a binding using FFI, which is compatible with MRI, Rubinius and JRuby.
  • rbczmq – a native extension (no JRuby support) binding using the czmq library.

I didn’t spend a lot of time with the zmq gem since it was so old. The ffi-rzmq gem worked well, but its interface didn’t feel very Ruby-like. For example, to receive a message you pass a buffer as a parameter, the buffer is filled with the message contents, and the return value is an error code, just like the underlying C call. This is quite un-Ruby-like: I would expect receive to return the received data, or raise an exception on error, in keeping with Ruby’s built-in socket and file I/O calls.

So I started to explore rbczmq. Initially I wasn’t so interested in the CZMQ wrapping part; I just wanted something more Ruby-like to use. And it was. And it was faster. And the CZMQ part actually helps too.

In ZeroMQ, each message part, or “frame”, is itself considered a message. So when you read a multi-frame message in ZeroMQ you need to check the “more” flag and then read the next part. CZMQ wraps this up as a single message made up of a number of “frames”, and rbczmq neatly exposes these as the Ruby classes ZMQ::Message and ZMQ::Frame. You can still send and receive raw frames (as strings), but the classes are a nice wrapper.
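Here’s a rough sketch of what that looks like in practice (the method names follow the CZMQ zmsg API as wrapped by rbczmq; treat this as an illustration rather than a definitive reference):

require 'rbczmq'

ctx  = ZMQ::Context.new
pull = ctx.socket(ZMQ::PULL)
push = ctx.socket(ZMQ::PUSH)
pull.bind("inproc://frames-demo")    # inproc endpoints must bind before connect
push.connect("inproc://frames-demo")

# Build a two-frame message and send it as a single unit.
msg = ZMQ::Message.new
msg.addstr("header")
msg.addstr("body")
push.send_message(msg)

# Receive the whole multi-frame message in one call.
reply = pull.recv_message
puts reply.size     # => 2 (frames)
puts reply.popstr   # => "header"
puts reply.popstr   # => "body"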

And to boot, it turned out to be way faster than the FFI gem. I seem to have lost track of the comparison I did, but I recall it was convincing.

What’s changed?

During this year, rbczmq has received a number of updates and new features, major ones including:

  • Upgrade to ZeroMQ 4
  • Upgrade to CZMQ 2
  • Support for SmartOS platform
  • Fixes to memory management

Major things still to do:

  • Add support for new authentication interface.
  • Ship binary gems (like libv8) to save compilation time on deploy / install.

Hard bits

The hardest bit of work I contributed to this project was fixing bugs in the memory management. In particular, CZMQ has specific rules about ownership of memory. Ruby is a garbage collected environment, which also has its own set of rules about ownership of memory. The two do not match.

Most calls to ZeroMQ are made outside of the Ruby “GVL” (Global VM Lock), which allows the Ruby VM to continue running Ruby code in other threads while one thread is doing a synchronous/blocking read on a socket, for example. When you combine this with Ruby threads, things can get hairy. The solution was two-fold:

  1. Use an ownership flag. When ownership was known to be transferred to ZeroMQ, mark the Ruby object as no longer owned by Ruby. This meant that the Ruby garbage collection callbacks would know whether they were ultimately responsible for freeing the memory used by an object. There was also some tricky interplay between contexts and sockets: a socket is owned by a context, and destroying a context also destroys its sockets, so a socket is only owned by Ruby if it has not been closed and its context has not been destroyed.
  2. A socket-closing mutex. Socket closing and context closing are asynchronous. If a socket is still open when a context is destroyed, all sockets belonging to that context will be closed. This happens outside the Ruby GVL, which means a race condition exists where the Ruby garbage collector may collect the socket while it is still closing. ZeroMQ socket close is not thread-safe, so a mutex was the only way to make this safe.

Using a mutex for socket close may result in a performance hit for an application which opens and closes sockets rapidly, but from what I understand, that is a bad thing to do anyway.

Looking forward

I have a few projects in the wild now using the rbczmq gem, and am very happy with its stability and performance. I haven’t used all of the APIs in anger (such as loops or beacons), but I’m sure the time will come. I look forward to another year of contributions to this project to keep it up to date with what’s happening in the ZeroMQ and CZMQ projects.

I’d love to hear from other people using this gem, so give me a shout!


Ruby Tuples (and file scanning)

I enjoyed Andrew Pontious’s recent episode of Edge Cases podcast talking about tuples. I’m doing a lot of Ruby these days, so I thought I’d add my two cents worth about using tuples in Ruby.

It’s true that there is no separate tuple class, but Ruby arrays can do everything that tuples in Python can do.

To assign two variables, you can do:

a, b = 1, 2

Which is equivalent to:

a, b = [1, 2]

Which is equivalent to:

a = 1
b = 2

Elements not present are treated as nil, so a, b = 1 assigns the value 1 into a and nil into b.
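For example (Ruby also has a splat for collecting any remaining values into an array):

a, b = 1
a            # => 1
b            # => nil

first, *rest = [1, 2, 3]
first        # => 1
rest         # => [2, 3]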

Functions can return arrays like so:

def f(x)
  [1, 2]
end

def g(x)
  return 1, 2
end  
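Either way, the call site can destructure the result into separate variables:

x    = f(0)   # => [1, 2]
a, b = g(0)   # a => 1, b => 2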

The Ruby way to iterate a list of items is with the each method that takes a block:

[1,2,3].each { |x| puts x }

This calls the block three times, with x taking the values 1, 2 and 3 from the list. If the items are themselves arrays, then the elements of those sub-arrays can be expanded out into the block variables, like so:

[[1,2], [3,4], [5,6]].each { |a, b| puts "a = #{a}, b = #{b}" }
# outputs:
# a = 1, b = 2
# a = 3, b = 4
# a = 5, b = 6

Hashes can also be enumerated this way, where each key value pair is represented as an array with 2 items:

{a: 1, b: 2, c: 3}.each { |key, value| puts "#{key} => #{value}"}
# outputs:
# a => 1
# b => 2
# c => 3

Python’s list comprehensions are really great. In Python you might write the following to select only the items of a list that satisfy some condition g(x), and return the value f(x) for those items:

results = [f(x) for x in source_list if g(x)]

Ruby achieves the same with select and map methods, which can be composed in either order according to your needs. The Ruby equivalent would be:

results = source_list.select { |x| g(x) }.map { |x| f(x) }

Python’s list comprehension can only do these two things, in that order. By making the select step and the map steps separate in Ruby, they can be composed in any order. To reverse the map and select order in Ruby:

results = source_list.map { |x| f(x) }.select { |x| g(x) }

This is not so easy in Python:

results = [y for y in [f(x) for x in source_list] if g(y)]

Ruby also provides many more useful operations that work on any enumerable sequence (for example, the lines read from a file); just take a look at the Enumerable module docs: http://www.ruby-doc.org/core-2.1.0/Enumerable.html
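For instance (the file name here is purely hypothetical), the lines of a file can be chained through Enumerable methods just like an array:

lines = File.readlines("numbers.txt")
sum   = lines.map(&:to_i).select { |n| n > 0 }.reduce(0, :+)
puts sum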

I’ve wandered a bit off the tuple track, so I’ll finish with yet another tangent relating to the podcast episode: deep-searching a file hierarchy for files matching an extension. This one-liner:

Dir.glob("**/*.json")

returns an array of all the .json files anywhere under the current directory. Ruby is full of little treasures like this.

I used to do quite a bit of scripting in Python until I learnt Ruby. I’ve never looked back.

Network latency in SmartOS virtual machines

Today I decided to explore network latency in SmartOS virtual machines. Using the rbczmq ruby gem for ZeroMQ, I made two very simple scripts: a server that replies to every request, and a benchmark script that times how long it takes to complete 5000 request/reply round trips after establishing the connection.

The server code is:

require 'rbczmq'
ctx = ZMQ::Context.new
sock = ctx.socket ZMQ::REP
sock.bind("tcp://0.0.0.0:5555")
loop do
  sock.recv
  sock.send "reply"
end

The benchmark code is:

require 'rbczmq'
require 'benchmark'

ctx = ZMQ::Context.new
sock = ctx.socket ZMQ::REQ
sock.connect(ARGV[0])

# establish the connection
sock.send "hello"
sock.recv

# run 5000 cycles of send request, receive reply.
puts Benchmark.measure {
  5000.times {
    sock.send "hello"
    sock.recv
  }
}

The test machines are:

* Mac laptop – server & benchmarking
* SmartOS1 (SmartOS virtual machine/zone) – server & benchmarking
* SmartOS2 (SmartOS virtual machine/zone) – benchmarking
* Linux1 (Ubuntu Linux in KVM virtual machine) – server & benchmarking
* Linux2 (Ubuntu Linux in KVM virtual machine) – benchmarking

The results are:

Source      Dest        Connection      Time (sec)    Req-Rep/Sec
------      ----        ----------      ----------    -----------
Mac         Linux1      1Gig Ethernet   5.038577      992.3
Mac         SmartOS1    1Gig Ethernet   4.972102      1005.6
Linux2      Linux1      Virtual         1.696516      2947.2
SmartOS2    Linux1      Virtual         1.153557      4334.4
Linux2      SmartOS1    Virtual         0.952066      5251.8
Linux1      Linux1      localhost       0.836955      5974.0
Mac         Mac         localhost       0.781815      6395.4
SmartOS2    SmartOS1    Virtual         0.470290      10631.7
SmartOS1    SmartOS1    localhost       0.374373      13355.7

localhost tests use 127.0.0.1
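The Req-Rep/Sec column is simply the 5000 round trips divided by the measured time, for example:

puts (5000 / 5.038577).round(1)   # => 992.3 (Mac -> Linux1 over gigabit Ethernet)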

SmartOS has an impressive network stack. Request-reply times from one SmartOS machine to another are over 3 times faster than when using Linux under KVM (on the same host). This mightn’t make much of a difference to web requests coming from slow mobile device connections, but if your web server is making many requests to internal services (database, cache, etc) this could make a noticeable difference.

ZeroMQ logging for ruby apps

I’ve been thinking for a while about using ZeroMQ for logging. This is especially useful with trends towards micro-services and scaling apps to multiple cloud server instances.

So I put thoughts into action and added a logger class to the rbczmq gem that logs to a ZeroMQ socket from an object that looks just like a normal ruby logger: https://github.com/mattconnolly/rbczmq/blob/master/lib/zmq/logger.rb

There’s not much to it, because, well, there’s not much to it. Here’s a simple app that writes log messages:

Log Writer:

require 'rbczmq'
require_relative './logger'
require 'benchmark'
ctx = ZMQ::Context.new
socket = ctx.socket(ZMQ::PUSH)
socket.connect('tcp://localhost:7777')
logger = ZMQ::Logger.new(socket)
puts Benchmark.measure {
  10000.times do |x|
    logger.debug "Hello world, #{x}"
  end
}

With benchmark results such as:

  0.400000   0.220000   0.620000 (  0.418493)

Log Reader:

And reading is even easier:

require 'rbczmq'
ctx = ZMQ::Context.new
socket = ctx.socket(ZMQ::PULL)
socket.bind('tcp://*:7777')
loop do
  msg = socket.recv
  puts msg
end

Voila. Multiple apps can connect to the same log reader. Log messages will be “fair queued” between the sources. In a test run on my 2010 MacBook Pro, I can send about 13000 log messages a second. I needed to run three of the log writers above in parallel before I maxed out the 4 cores and it slowed down. Each process used about 12 MB RAM. Lightweight and fast.

Log Broadcasting:

If we then need to broadcast these log messages for multiple readers, we could easily do this:

require 'rbczmq'
ctx = ZMQ::Context.new
socket = ctx.socket(ZMQ::PULL)
socket.bind('tcp://*:7777')
publish = ctx.socket(ZMQ::PUB)
publish.bind('tcp://*:7778')
loop do
  msg = socket.recv
  publish.send(msg)
end

Then we have many log sources connected to many log readers. And the log readers can also subscribe to a filtered stream of messages, so one reader could do something special with error messages, for example.
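A filtered reader is then just a SUB socket with a subscription prefix. Here’s a hedged sketch (what prefix to subscribe to depends entirely on how your log lines are formatted):

require 'rbczmq'
ctx = ZMQ::Context.new
sub = ctx.socket(ZMQ::SUB)
sub.connect('tcp://localhost:7778')
sub.subscribe('E')   # sets ZMQ_SUBSCRIBE; only messages starting with "E" are delivered
loop do
  puts sub.recv
end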

Building ruby 2.0.0 for Mac

After downloading ruby source code, use this:

CC=/usr/bin/clang ./configure ...

This also works with RVM:

CC=/usr/bin/clang rvm install ruby-2.0.0

Comparing Amazon EC2 to Joyent SmartOS

Recently, I’ve been using Amazon Web Services (EC2, especially) quite a bit more at work. At home I still use OpenIndiana, so I’ve been really interested in comparing Joyent’s offerings against Amazon’s first hand. In particular, the tasks I run in Amazon’s cloud always feel CPU bound, so I decided to compare just CPU performance, giving some context to Amazon’s jargon ECU (Elastic Compute Unit) by comparing it with a Joyent SmartOS instance, as well as my MacBook Pro, iMac and OpenIndiana server.

So I spun up a Joyent Micro SmartOS instance, plus Amazon EC2 Linux Micro and Small instances.

Joyent startup is impressive. The workflow is simple and easy to understand. I chose the smartosplus64 machine just because it was near the top of the list.

Amazon startup is about what I’ve learned to expect: many more pages of settings later, we’re up and running.

Installing ruby 1.9.3 with RVM

Ubuntu Linux has fantastic community support, and many packages just work out of the box. Following the RVM instructions, it was easy to get Ruby installed.

SmartOS, like OpenIndiana, often requires a bit more work.

I made this patch to get ruby to compile: https://gist.github.com/4104287
Thanks to this article: http://www.hiawatha-webserver.org/forum/topic/1177

A Simple Benchmark

Here’s a really quick Ruby benchmark that sorts 5 million random numbers in a single thread:

require 'benchmark'

array = (1..5000000).map { rand }
Benchmark.bmbm do |x|
  x.report("sort!") { array.dup.sort! }
  x.report("sort") { array.dup.sort }
end

I also tested my MacBook Pro, my iMac and my Xeon E3 OpenIndiana server to get some perspective.

Here’s the results:

Machine                                  Benchmark (sec)
MacBook Pro 2.66GHz Core i7 (2010)       86.99
iMac 24″ 2.5GHz Core i5 (2012)           19.30
Xeon E3-1230 3.2GHz OpenIndiana server   35.57
Joyent EXTRA SMALL SmartOS 64-bit        55.10
Amazon MICRO Ubuntu 64-bit               361.42
Amazon SMALL Ubuntu 64-bit               123.69

Snap. Amazon is *SLOW*! And the iMac is the surprise winner!

And so what is this Elastic Compute Unit (ECU) jargon that Amazon have created? Since the Amazon Small instance is rated at 1 ECU, we can work backwards and measure the others in compute units. And by converting each hourly price to a monthly price (hourly price × 24 hours × 365.25 days / 12 months), we can also determine the price per ECU:

Machine                                  Benchmark (sec)   $/hour   ECUs    $/month/ECU
MacBook Pro 2.66GHz Core i7 (2010)       86.99                      1.422
iMac 24″ 2.5GHz Core i5 (2012)           19.30                      6.409
Xeon E3-1230 3.2GHz OpenIndiana server   35.57                      3.477
Joyent EXTRA SMALL SmartOS 64-bit        55.10             $0.03    2.245   $9.76
Amazon MICRO Ubuntu 64-bit               361.42            $0.02    0.342   $42.69
Amazon SMALL Ubuntu 64-bit               123.69            $0.07    1.000   $47.48
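For the record, here’s a sketch of that arithmetic in Ruby (the hourly rates above look rounded, so the output won’t match the table to the cent):

HOURS_PER_MONTH = 24 * 365.25 / 12     # ~730.5 hours
SMALL_TIME = 123.69                    # Amazon Small = 1 ECU baseline

machines = {
  "Joyent EXTRA SMALL" => { time: 55.10,  hourly: 0.03 },
  "Amazon MICRO"       => { time: 361.42, hourly: 0.02 },
  "Amazon SMALL"       => { time: 123.69, hourly: 0.07 },
}

machines.each do |name, m|
  ecus          = SMALL_TIME / m[:time]
  price_per_ecu = m[:hourly] * HOURS_PER_MONTH / ecus
  puts format("%-20s %.3f ECU  $%.2f/month/ECU", name, ecus, price_per_ecu)
end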

Snap. Amazon is *EXPENSIVE*!

My laptop, with 4 threads, could do the CPU work of 5.7 Small Amazon EC2 instances, worth $270/month. And my Xeon box, with 8 threads, could do the work of 27.8 Small instances, worth $1320/month. (I built the whole machine for $1200!!) Mind you, these comparisons are on the native operating system, but if you’re running a machine in house this is an option, so it might be worth considering.

I’ve read that comparing SmartOS to Linux in a virtual machine isn’t a fair comparison because you’re not comparing apples with apples; one is operating system level virtualisation (Solaris Zones), the other is a full virtual machine (Xen Hypervisor). Well tough. All I need to do is install tools and my code and get work done. And if I can do that faster then that is a fair comparison.

Conclusion

Joyent CPU comes in more than 4 times cheaper than Amazon EC2.

Amazon need to lift their game in terms of CPU performance. They offer a great service that obviously extends far beyond a simple CPU benchmark. But when you can get the same work done on Joyent significantly faster for a comparable price, you’ll get far more mileage per instance, which is ultimately going to save dollars.

 

EDIT: 19/11/12: Joyent’s machine is called “Extra Small”, not Micro as I originally had it.

Passenger apache module for OpenIndiana

I did a bit of hunting and made some patches to the ‘passenger’ gem so that its Apache module would compile on OpenIndiana. The changes are in my GitHub fork:

https://github.com/mattconnolly/passenger

And I just noticed that one of the fixes matches a patch in Joyent’s SmartOS instructions for using Passenger.

I tested this also on a VM guest installation of Solaris 11 Express, and it worked too. I’d be interested to hear if it works for others on OpenIndiana, Solaris or SmartOS.

So with updates to RVM, the latest version of Ruby, and this patched version of Passenger, I’m finally good to go to deploy Rails apps on OpenIndiana. Woot!

Installing mysql2 gem on OpenIndiana

Quick one: with MySQL 5.1 installed from the standard OpenIndiana package repository, the Ruby mysql2 gem can be installed with this command:

$ gem install mysql2 -v '0.2.18' -- --with-mysql-dir=/usr/mysql/5.1 --with-mysql-include=/usr/mysql/5.1/include/mysql

This one requires /usr/gnu/bin in the front(ish) of your path, so you may need an `export PATH=/usr/gnu/bin:$PATH` before you do this.

Enjoy.

 

[UPDATE]

This also works for the latest version ‘0.3.11’:

$ gem install mysql2 -v '0.3.11' -- --with-mysql-dir=/usr/mysql/5.1 --with-mysql-include=/usr/mysql/5.1/include/mysql

[UPDATE 2 – ruby-2.0.0]

Ruby 2.0.0 compiles as a 64-bit executable, which means another bit needs to be added to the command line options:

$ gem install mysql2 -v '0.3.11' -- --with-mysql-dir=/usr/mysql/5.1/ --with-mysql-include=/usr/mysql/5.1/include/mysql --with-mysql-lib=/usr/mysql/5.1/lib/amd64/mysql

TTCP, in Ruby

I’ve used the TTCP TCP test program from time to time, and at present I’m looking at some networking in Ruby. So why not have a go at porting it to Ruby? So I did.

This has been built as a gem, with an executable ‘ttcp’ that installs into your gem bin folder. You can get the gem from here: http://rubygems.org/gems/ttcp

Or type: `gem install ttcp` at your terminal.

Source code is released under MIT license, and available on github: https://github.com/mattconnolly/ttcp

So far, I’ve tested it out on Mac and OpenIndiana in ruby 1.8.7, 1.9.3 and JRuby 1.6.5. I can’t seem to run the tests in JRuby, but it appears to work anyway.

Enjoy.

Testing with cucumber – session state

I was recently at the Brisbane Ruby on Rails meetup, where one discussion was about the speed of testing. I’ve also read in many places that Rails apps are terribly slow to test.

In a recent Rails app I’m working on, I wrote several features describing how I want user sign-up, sign-in and sign-out to function. However, my subsequent features about a user’s data also required me to repeat the sign-in step in every scenario. While it’s not too hard to start each scenario with “given I am signed in (as a role)”, or to put that step in a “Background:” section, it does mean the Capybara steps are repeated for every single scenario.

So I started to explore how a scenario can skip the sign-in web steps, and it appears it cannot be done (easily). This Stack Overflow discussion is very relevant: http://stackoverflow.com/questions/1271788/session-variables-with-cucumber-stories

I do find it very odd that Cucumber steps can access and modify the Rails database (e.g. load fixtures, or create objects with factories and save them) but cannot access the session state. The same applies to RSpec request specs.

I guess that’s just the way it is, but in the interest of speeding things up, I’d like to know if there are any special reasons for this limitation that prevent us from speeding up these tests.