Matt Connolly's Blog

my brain dumps here…


Rails: beware a column named ‘type’

It’s been said many times: don’t use a column named ‘type’, because Rails uses it to determine which subclass of a model should be loaded for single table inheritance. (Great feature, by the way.)

But if you have an existing database with a column called “type” and you want to talk to that database with ActiveRecord, this is a great tip (thanks to this post):

class MyModel < ActiveRecord::Base
  self.inheritance_column = "inheritance_type"
end
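
With the inheritance column pointed elsewhere, `type` behaves like any other attribute, so you can query and read it normally (the value below is hypothetical):

record = MyModel.where(:type => 'legacy').first  # hypothetical value
record.type  # => "legacy", just an ordinary column now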
Ahh… back to work.

Xcode: testing AFNetworking operation callback blocks

Just recently, I was writing some tests in Xcode for some HTTP requests using the AFNetworking library. Previously I’ve used the ASIHTTPRequest library, but in this particular project I’ve chosen AFNetworking for its JSON support.

Since the requests run asynchronously, we need a way to wait for the operation to complete. That part is easy:

- (void)testRequest
{
    MyHTTPClient* api = [MyHTTPClient sharedInstance]; // subclass of AFHTTPClient
    NSDictionary* parameters = [NSDictionary dictionary]; // add query parameters to this dict.
    __block int status = 0;
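    // NB: this assumes MyHTTPClient's getPath:... builds and returns the operation
    // without enqueueing it; the stock AFHTTPClient getPath: returns void and
    // enqueues the operation itself, which would make the enqueue below a double-up.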
    AFJSONRequestOperation* request = [api getPath:@"path/to/test"
                                        parameters:parameters
                                           success:^(AFHTTPRequestOperation *operation, id responseObject) {
                                               // success code
                                               status = 1;
                                               NSLog(@"succeeded");
                                           } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
                                               // failure
                                               status = 2;
                                               NSLog(@"failed");
                                           }];
    [api enqueueHTTPRequestOperation:request];
    [api.operationQueue waitUntilAllOperationsAreFinished];

    STAssertTrue([request isFinished], @"request finished");
    STAssertEquals(request.response.statusCode, (NSInteger)200, @"request returned 200 OK");
    STAssertEquals(status, 1, @"success block was executed");
}

This is great for testing that the request completes and verifying its status. But if we need to test anything that happens in the success or failure callbacks, the last assertion will fail with `status == 0`.

This is because AFNetworking processes its response on a background thread, and the final success or failure block is then dispatched asynchronously to a queue, which is the main queue unless you provide another. This means the block won’t get called until *AFTER* the test code has completed.

Putting in some kind of lock causes a deadlock: the test runs on the main thread, so the block callback never gets a chance to run. The solution is to manually run the main thread’s run loop until the callbacks have been processed.

Here’s my solution:

- (void)testRequest
{
    MyHTTPClient* api = [MyHTTPClient sharedInstance]; // subclass of AFHTTPClient
    NSDictionary* parameters = [NSDictionary dictionary]; // add query parameters to this dict.
    __block int status = 0;
    AFJSONRequestOperation* request = [api getPath:@"path/to/test"
                                        parameters:parameters
                                           success:^(AFHTTPRequestOperation *operation, id responseObject) {
                                               // success code
                                               status = 1;
                                               NSLog(@"succeeded");
                                           } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
                                               // failure
                                               status = 2;
                                               NSLog(@"failed");
                                           }];
    [api enqueueHTTPRequestOperation:request];
    [api.operationQueue waitUntilAllOperationsAreFinished];

    while (status == 0)
    {
        // pump the run loop so the async dispatch can be handled on the main thread
        // AFTER the operation has been marked as finished (even though the callbacks
        // haven't run yet).
        [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                 beforeDate:[NSDate date]];
    }

    STAssertTrue([request isFinished], @"request finished");
    STAssertEquals(request.response.statusCode, (NSInteger)200, @"request returned 200 OK");
    STAssertEquals(status, 1, @"success block was executed");
}

This addition continually pumps the run loop, which allows AFNetworking’s async dispatch of the block to the main queue to execute, and hey presto! We now have a test that can also verify code in the success (or failure) completion blocks of an AFNetworking request operation. (If there’s any chance that neither block runs, add a timeout to that while loop so the test can’t spin forever.)

OpenIndiana – installing ImageMagick and the RMagick gem

I found that the RMagick gem wouldn’t install against the standard OpenIndiana package for ImageMagick because it was too old, and the one installed from the SFE repository didn’t seem to work either. But installing ImageMagick from source (version 6.7.6) was pretty straightforward.

The only catch was that because I installed it in /opt/local, the Magick-config tool couldn’t find its package config (.pc) files, i.e. I was getting this:

$ /opt/local/bin/Magick-config --cflags
Package MagickCore was not found in the pkg-config search path.
Perhaps you should add the directory containing `MagickCore.pc'
to the PKG_CONFIG_PATH environment variable
No package 'MagickCore' found

The RMagick extension needs to find `Magick-config` in PATH, and that in turn needs to find its package config files. So:

$ export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/local/lib/pkgconfig
$ export PATH=$PATH:/opt/local/bin
$ gem install rmagick -v '2.13.1'
Building native extensions. This could take a while...
Successfully installed rmagick-2.13.1
1 gem installed
Installing ri documentation for rmagick-2.13.1...
Installing RDoc documentation for rmagick-2.13.1...
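
As a quick sanity check that the gem loads and is linked against the new ImageMagick (RMagick 2.x uses the capitalised require name):

$ ruby -rRMagick -e 'puts Magick::Magick_version'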

Done.

Amazon EC2: Amazon Linux vs Ubuntu Linux

Just for curiosity’s sake, I decided to boot up an instance of each, install Apache, MySQL, Ruby and RubyGems, and see how much disk and memory each used.

Using the preset 32-bit installs (each on an 8GB EBS volume), the disk usage was:

Ubuntu: 1.1GB / Amazon-Linux: 1.2GB

And after a clean boot, free memory was:

Ubuntu: 560128k free / Amazon-Linux: 541844k free.
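
For the record, the numbers came from the usual tools on each instance, along the lines of:

$ df -h /   # disk usage after the installs
$ free      # free memory after a clean boot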

Not much in it, but Ubuntu wins on both.

The only other thing I’ve noticed so far is that the repositories for Amazon Linux seemed faster to access than Ubuntu’s. Not a big deal, though, once everything is installed.

I’m very interested to see whether Ubuntu’s Landscape offers better insights into an Ubuntu instance than Amazon’s own CloudWatch metrics… I’ll have a play soon.

RVM in OpenIndiana revisited

I just went through the process of setting up RVM from scratch in a zone on OpenIndiana. I wrote previously about the GNU path problem, but didn’t write down any of the OpenIndiana dependencies, so here they are:

Installing RVM

To install RVM, you will need:

  1. curl certificates installed into /etc/curl/curlCA (not installed by default in a new zone; one fix is sketched after this list)
  2. The following packages are required to install rvm:
    `# pkg install archiver/gnu-tar text/gawk text/gnu-grep pkg:/sfe.openindiana.org/developer/versioning/git`
    Note: This requires adding the “SFE” (Spec Files Extra) repository as a known publisher. If this is already set up in your global zone then the new non-global zone should inherit it.
  3. Set the PATH environment variable to include `/usr/gnu/bin` at the front, e.g. in .bashrc:
    `export PATH=/usr/gnu/bin:$PATH`
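
On point 1, the quickest fix I know of is to copy the CA bundle in from the global zone (the zone path below is hypothetical; adjust it for your zone root):

`# mkdir -p /zones/myzone/root/etc/curl`
`# cp /etc/curl/curlCA /zones/myzone/root/etc/curl/`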

Building Ruby 1.8.7

You will additionally need the following packages installed to build ruby 1.8.7 using rvm:

`# pkg install runtime/gcc text/gnu-patch developer/library/lint system/header system/library/math/header-math file/gnu-coreutils`
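
With those packages in place, the build itself is the usual:

`$ rvm install 1.8.7`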

Et voilà!

Installing Ruby Enterprise Edition also worked for me, but its installer requires another ruby to be installed first.

Problems

So far the following rubies / gems have had problems:

  • Ruby 1.9.3 does not build correctly (1.9.2 does, though). See the illumos bug tracker: https://www.illumos.org/issues/1587
  • The Passenger gem installs, but its Apache module fails to compile.

 

So it’s not yet a perfect world.

Testing with cucumber – session state

I was recently at the Brisbane Ruby on Rails meetup, where one discussion was about the speed of testing. I’ve also read in many places that Rails apps are terribly slow to test.

In a recent Rails app I’m working on, I wrote several features describing how I want user sign-up, sign-in and sign-out to function. Following this, my next features, about a user’s data, also required me to repeat the sign-in steps in every scenario. While it’s not too hard to start each scenario with “Given I am signed in (as a role)”, or to put that step in a “Background:” section, it does mean the Capybara steps are repeated for every single scenario.

So I started to explore how a scenario can skip the sign-in web steps, and it appears it cannot be done (easily). This Stack Overflow discussion is very relevant: http://stackoverflow.com/questions/1271788/session-variables-with-cucumber-stories

I do find it very odd that cucumber steps can access and modify the Rails database (e.g. load fixtures, or create objects with factories and save them) but cannot access the session state. The same applies to RSpec request specs.

I guess that’s just the way it is, but in the interest of faster tests, I’d like to know if there’s any special reason for this limitation.
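
One avenue I haven’t fully explored yet: if the app’s authentication is built on Warden (as Devise’s is), Warden ships test helpers that can sign a user in directly, without driving the sign-in form through Capybara. A rough sketch, assuming Devise and a hypothetical user factory:

# features/support/warden.rb
Warden.test_mode!
World(Warden::Test::Helpers)
After { Warden.test_reset! }

# features/step_definitions/auth_steps.rb
Given /^I am signed in as an? (\w+)$/ do |role|
  # the :user factory and :role attribute are hypothetical; adjust to suit
  login_as FactoryGirl.create(:user, :role => role), :scope => :user
end

That bypasses the browser steps entirely, so each scenario pays for a database insert instead of a full sign-in round trip.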

Telstra Bigpond Cable crappy Netgear Router part 6

So I got home from work while my wife was downloading something onto her computer. I plugged in my laptop and boom… web pages timing out, and masses of connections stuck in SYN_SENT in my netstat output. There was no problem connecting to the router, I could see it had good signal strength, and the line definitely hadn’t dropped out because the other computer was still downloading. I could even connect to my NAS and run a backup.

The only piece of the network that wasn’t working correctly was the Netgear router, showing the symptoms others have described when the NAT (Network Address Translation) table fills up and the router simply cannot handle any more connections.

So it was back on the phone to Telstra Bigpond technical support to see what could be done about it. Nothing, it turns out, because all they can do is send out a field technician to check the line (which has happened 3 times now), or replace the modem with the same model. Not good enough.

They referred me to sales. Odd, but at least I was getting somewhere else, because the people on the tech support line clearly have no capability to deal with my issue. Logically it makes some sense: the sales team provisioned substandard equipment, so what can support do about it?

So I got on the phone to sales to continue complaining about this. After explaining the situation yet again (I need to make a recording to play down the line to them), they decided to send me a replacement modem. I insisted that they provide a *better* modem, because another Netgear would simply result in the same problems; we’ve already been down that path.

The girl tells me that she’s going to send me a Thomson modem. I’d only heard of Thomson making ADSL modems, and she *couldn’t* tell me the model number, which is quite concerning. I’ve since looked up Thomson (bloody hard to find anything on Technicolor’s crappy website) and they do indeed make cable modems.

So fingers crossed, this replacement is a cable modem and its router works properly.

Failing that, I can feel a letter to the TIO coming on.

Oh, and if anyone reading this has used a Thomson Cable modem, especially if provisioned by Telstra, I’d love to hear how it went!

When rate limiting your server more than doubles your server output…

At work, we’ve had a few customers mention that they’ve experienced slow downloads from our servers. When I’ve tested it from home, I’ve experienced the same thing, albeit not quite as bad. The best data rate I could get was about 30% of our server’s bandwidth.

In the last few days I’ve had several conversations with the network engineers at our ISP trying to identify exactly what the problem is. (Thank goodness we’re not with Telstra; if we had to wait three times for a field technician to check it was plugged in OK, we’d lose our business!)

After having the ISP’s network engineer change a few settings on their equipment, and doing some speed tests against a mini speed-test site on their servers, we were still only able to use about 30% of our output bandwidth. Crapola.

He explained to me that our rate limiting was done by traffic policing at the switch on the other end of our link. From some reading about what traffic policing is, I understand that when your data rate is exceeded, packets are simply dropped. That shouldn’t be too much of a drama: TCP is designed to recover from packet loss, and it does a great job of it, right? But every burst of drops makes TCP cut its congestion window and slowly ramp up again, so what does this packet loss mean for our actual throughput?

After making numerous other changes, none of which helped our bandwidth problem, I decided to try something else: rate limiting our server.

Our web files are served by Apache running on a Mac, and luckily Mac OS includes rate limiting controls in its built-in firewall. (Great little tutorial here.)

So with the `ipfw` command at the ready, I limited outgoing traffic on port 80 (http) to 80% of our bandwidth. And voilà! Download rates more than doubled, from 30% to 80% of our output limit!!
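
For the record, the commands were along these lines (the rule number and bandwidth figure are made up; substitute roughly 80% of your own uplink):

$ sudo ipfw add 100 pipe 1 tcp from any 80 to any out
$ sudo ipfw pipe 1 config bw 8Mbit/s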

I never expected that rate limiting our own server would cause our outgoing data rate to increase, let alone more than double!

I’m sure there is a time and place for dropping packets (traffic policing), but it clearly isn’t working well for us. If anyone has more input on where policing is appropriate, or suggestions for alternatives, please let me know!

Telstra Bigpond Cable crappy Netgear Router part 5

When I previously rang Telstra, continuing my hunt for a replacement for Telstra Bigpond’s crappy Netgear CGD24N cable modem, I was told a field technician was required to replace the modem THREE TIMES before they would escalate the issue beyond the call centre to someone higher up in Telstra. Well, guess what? The guy never showed up.

I guess this is some ploy of theirs to never get to the THREE TIMES, so that they never have to escalate the issue. Bad form.

Last night I rang again, and at least this time I was escalated to a senior person in the call centre. This guy, Ed, was pretty bright, but he still didn’t know anything about Network Address Translation and insisted that noise on the line was causing dropouts because of dynamic IP configuration. Yeah, right.

So now I have an appointment for another technician; this will be visit #2, on Friday morning. I hope this guy turns up.

Telstra Bigpond Cable crappy Netgear Router part 4

Today I rang Bigpond support again about my issue with the Netgear CGD24N router being slow and intermittently having computers time out when accessing web pages. I’m not alone in this; just search the Whirlpool forum to see how many people have the same problem.

The girl I spoke to was, as usual, in a foreign call centre. She wanted to check my wireless settings, signal status, etc. I politely obliged for a while before insisting that she look at the notes from my previous calls and escalate the issue to a higher-level department.

ONLY NOW do I find out that their policy is to escalate the issue only if it persists after a field technician has been to my house THREE TIMES. How much of my time do I have to sacrifice from my work to convince them that there’s something wrong in the ROUTER?