Matt Connolly's Blog

my brain dumps here…

Tag Archives: internet

OpenIndiana – running openvpn as a service

Here’s a gist with the XML manifest to run openvpn as a service.

It expects an openvpn config file at /etc/openvpn/config, which you’ll need to set up with your settings, certificates, etc.
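The manifest itself only needs a handful of elements. A minimal sketch of what it might contain (the openvpn binary path and the service name here are my guesses, so check them against your install):

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="openvpn">
  <service name="network/openvpn" type="service" version="1">
    <!-- disabled by default; enable with svcadm -->
    <create_default_instance enabled="false"/>
    <single_instance/>
    <!-- don't start until the network is up -->
    <dependency name="network" grouping="require_all" restart_on="error" type="service">
      <service_fmri value="svc:/milestone/network:default"/>
    </dependency>
    <exec_method type="method" name="start"
      exec="/usr/local/sbin/openvpn --daemon --config /etc/openvpn/config"
      timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":kill" timeout_seconds="60"/>
  </service>
</service_bundle>
```

Import it with `svccfg import openvpn.xml`, then `svcadm enable openvpn`.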

If you configure it to run a tap interface, Bonjour advertising will work over the link, which is great if you want Time Machine or other Bonjour services to work against an OpenIndiana server from a Mac connecting from anywhere with openvpn.

When rate limiting your server more than doubles your server output…

At work, we’ve had a few customers mentioning to us that they’ve experienced slow downloads of data from our servers. When I’ve tested it at home, I’ve experienced the same thing, albeit not quite as bad. The best data rate I could get was about 30% of our server’s bandwidth.

In the last few days I’ve had several conversations with the network engineers at our ISP trying to identify exactly what the problem is. (Thank goodness we’re not with Telstra; if we had to wait three times for a field technician to check it was plugged in properly, we’d lose our business!)

After having the ISP’s network engineer change a few settings on their equipment, and doing some speed tests to a mini speed test site on their servers, we were still only able to utilise about 30% of our output bandwidth. Crapola.

He explained to me that our rate limiting was done by traffic policing at the switch on the other end of our link. After some reading about what traffic policing is, I’m led to understand that when your data rate is exceeded, packets are simply dropped. That shouldn’t be too much of a drama: TCP is designed to recover from packet loss, and it does a great job of it, right? But what does this packet loss mean for our actual throughput rates?
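To get a feel for the difference, here’s a toy simulation (not our actual setup, and it ignores TCP’s congestion backoff entirely) of a token bucket that either polices (drops) or shapes (queues) bursty traffic:

```python
from collections import deque

def token_bucket(arrivals, rate, burst, mode):
    """Toy token bucket. arrivals[i] = packets arriving at tick i.
    'police' drops packets that exceed the rate; 'shape' queues them."""
    tokens = burst
    queue = deque()
    delivered = dropped = 0
    for pkts in arrivals:
        tokens = min(burst, tokens + rate)   # refill, capped at burst size
        if mode == "shape":
            queue.extend([1] * pkts)         # hold excess packets...
            while queue and tokens >= 1:     # ...and send them as tokens allow
                queue.popleft()
                tokens -= 1
                delivered += 1
        else:                                # police: no queue, just drop
            for _ in range(pkts):
                if tokens >= 1:
                    tokens -= 1
                    delivered += 1
                else:
                    dropped += 1
    return delivered, dropped

bursty = [10, 0, 0, 0, 10, 0, 0, 0]          # two bursts of 10 packets
print(token_bucket(bursty, rate=3, burst=3, mode="police"))  # (6, 14)
print(token_bucket(bursty, rate=3, burst=3, mode="shape"))   # (20, 0)
```

Same offered load, but the policer throws away most of each burst while the shaper delivers everything a few ticks later. Add TCP halving its window on every drop and it’s easy to see how policing can leave a link badly under-utilised.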

After making numerous other changes, none of which helped our bandwidth problem, I decided to try something else: rate limiting our server.

Our web files are served by Apache running on a Mac, and luckily Mac OS X includes rate limiting controls in its built-in firewall. (Great little tutorial here.)

So with the `ipfw` command at the ready, I limited outgoing traffic on port 80 (http) to 80% of our bandwidth. And voila! Download rates more than doubled, from 30% to 80% of our output limit!!
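For the record, the dummynet rules look something like this (the 8Mbit/s figure is just an example; substitute roughly 80% of your own uplink):

```shell
# Create a dummynet pipe capped at ~80% of the uplink bandwidth
sudo ipfw pipe 1 config bw 8Mbit/s

# Push outgoing http responses (source port 80) through the pipe
sudo ipfw add 100 pipe 1 tcp from any 80 to any out
```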

I never expected that rate limiting our server would cause our outgoing data rate to increase, let alone more than double!

I’m sure there is a time and place for dropping packets (traffic policing), but it doesn’t appear to be working well for us. If anyone has more input on where it’s appropriate, or suggestions for alternatives, please let me know!

Redmine: Ruby on Rails without Virtual Hosts

I’ve been playing around with Redmine, a Rails app, on Mac OS X 10.6. It’s been quite a pain to get set up. It seems the only easy way to get Passenger working is with virtual hosts, and it’s not really obvious or intuitive how virtual hosts work. After a lot of RTFMing through the Apache docs and the Passenger Google group, I got it working. But virtual hosts are still a pain, because you have to add them to the /etc/hosts file on every machine on the network, and there’s still a high likelihood of breaking other things (like php apps).

So, is it possible to install redmine (or any ruby on rails app) in a directory without using a separate virtual host just for the app, AND without taking down other php apps (drupal, phpmyadmin, etc) which are running on the same apache server?

Yes. After lots of trial and error, the solution is fairly simple, but has to be done in a specific way.

Step 1. Configure httpd.conf files

Add this to your httpd.conf file (or an included file) before your users/other conf files are loaded:

LoadModule passenger_module /Library/Ruby/Gems/1.8/gems/passenger-2.2.15/ext/apache2/mod_passenger.so
PassengerRoot /Library/Ruby/Gems/1.8/gems/passenger-2.2.15
PassengerRuby /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
PassengerEnabled Off

Follow the passenger install instructions to make sure that you’ve got the right paths and versions in there. “PassengerEnabled Off” in the global scope disables it by default. We’ll enable it just for where we want it.

Step 2. Setup a .conf file for the rails app

In my case, I’ve added a “redmine.conf” file which is imported by httpd.conf. In this file, I have:

<Location "/redmine">
    Options Indexes ExecCGI FollowSymLinks -MultiViews
    Order allow,deny
    Allow from all
    AllowOverride all

    PassengerEnabled On
    RailsBaseURI /redmine
    RailsEnv production
</Location>

Here we enable Passenger, set the rails environment and the rails base URI.

Step 3. Setup a symlink

I had tried this with real directories and Apache “Alias” directives, with no luck. The Passenger module docs do have instructions, but only for use with virtual hosts; if you ignore the virtual hosts part, it works. On the Mac, I have it set up like this:

$ # /Library/WebServer/Documents -- document root
$ # /Library/WebServer/Sites -- for rails and other apps.
$ cd /Library/WebServer/Documents
$ ln -s ../Sites/redmine/public redmine

Finally, a solution that other machines on the network can access, without the virtual host nightmare.

Faster internet at home

Just moved house and got the internet connection going. My router mustn’t be ADSL2 capable as I thought, but at least we’re running at the max speed for ADSL1.

Internode Easy Broadband Internet Plan

Internode have just released a new internet plan: “Internode Easy Broadband”. It’s their first plan that counts both downloads and uploads toward the data quota.

Simon Hackett, Managing Director of Internode, writes on the whirlpool forum that it’s all about effective comparisons with other major ISPs such as Telstra and Optus.

There’s a fair bit of noise on the forum about this, but I think it’s a good idea. As internet applications and highly efficient transport of data via peer-to-peer systems become more commonplace, the amount of uploads will increase dramatically in the future. It’s only fair that Internode be compared on even footing with the other big ISPs – and at what seems like much better value too.

Of course, if you don’t like it, no one is making you take it. That’s a fresh perspective compared to other ISPs, notably TPG, who change plans and terms on their customers without choice.

So, my opinion is: another win for Internode! Good work, Simon Hackett.

optus fusion = corporate greed

I nearly signed up for an Optus Yes Fusion plan earlier in the year, but it turned out that Optus was unable to get a connection on the line at the unit where I’m renting. At first it seems like good value, but there’s something in the fine print that’s ridiculous.

When the data limit is reached on most plans, the ISP usually does one of two things:

  • speed limit for remainder of month, or
  • charge for excess usage.

I don’t know anyone who likes to pay excess usage, particularly when the price for the excess is exorbitant. Most excess usage charges are around $6/GB (see Whirlpool), but sometimes they’re as ridiculously high as $150/GB.

The Optus Fusion plan gives you the worst of both. How? They charge you $300 for the first 2GB over the limit, and then shape your speed.

So Optus reserves the right to make $300 off you in any month of the 24-month contract period where you go over your data limit.

I’m glad to say that my ISP, internode, defaults to shaping but gives the choice to buy more data for the remainder of the month, at a reasonable price of $2.50/GB. I’m happy to have the choice, and the price is fair. That’s what you don’t get with Optus Fusion.

Say No to corporate greed, say No to Optus Yes Fusion.

Gmail spam filter broken

Spam has been a problem for many years now. Thankfully the problem seems to be decreasing in general. Check out the Google Trends graph of searches for “spam filter”.

I’ve gone through various spam filtering options, and the one I’ve been happiest with is Gmail’s built-in spam filtering. I’ve set up my other email addresses to forward to my Gmail account, which I can check with Mac Mail or my iPhone via IMAP, and, of course, from anywhere on the web.

But in the last week, several spam messages per day have been slipping through. A quick search on the web seems to show that there were problems with Gmail spam filtering back in 2007, but not so much recently. Grrrr…

Downloading Youtube videos

Quite a few times I’ve wanted to watch a youtube video and been caught without an internet connection…

Here’s a great trick for users of Safari, thanks to this page. Also, thanks to a Wikipedia article, you can append &fmt=22 to the end of the URL to force YouTube to go straight to the HD version of the video.

So, here’s my steps:

  1. Find a youtube video.
  2. Add &fmt=22 to the end of the URL and reload the page – if it’s not already in HD.
  3. Wait a moment for the video to start playing.
  4. Press Cmd+Option+A to bring up the Activity window.
  5. Look for the biggest file in the list… something like this:



    Safari Activity window


  6. Double click that entry.
  7. Voila. It’s downloading into Safari’s Downloads folder.

If you get the HD version, it will download automatically as a .mp4 file, which is great because Quicktime can play it out of the box without needing any silly FLV addons.

Oh… and Youtube HD videos actually are HD: 1280 x 720 resolution!!


Viruses Infecting the World

I caught part of a story on 60 Minutes last night called “The Enemy Within”, about computer viruses attacking computers all over the world. One part of the story was about the CBS news network in the US, where they had discovered – much to their amazement – that viruses had infected some of their computers.

I also noticed that all of the computers they showed on the story were running a Microsoft operating system.

Thank goodness I use an Apple Mac. When all of the dogs in the neighbourhood are sick…. get a cat.

spam spam and more spam

I’m helping build a website for a friend at the moment and, you guessed it, the guestbook got filled with spam.

Fortunately, it was all from one email address, so it was easy to remove all the posts in one big hit.

But it got me thinking about how to stop it.

Here’s a few things I’m going to try:

  1. Only accept posts where I can verify that the referrer is my own site.
  2. Automatically block posts when more than a certain number arrive from the same IP address in a day (let’s say 5).
  3. Use the API from the Stop Forum Spam site.
  4. Perhaps use a CAPTCHA.
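Item 2 above could be sketched along these lines (the function name and the 5-per-day threshold are just my placeholder choices):

```python
import time
from collections import defaultdict

POSTS_PER_DAY = 5        # illustrative daily limit per IP
WINDOW = 24 * 60 * 60    # one day, in seconds

_history = defaultdict(list)  # ip -> timestamps of accepted posts

def allow_post(ip, now=None):
    """Accept a guestbook post unless this IP has already hit the daily limit."""
    now = time.time() if now is None else now
    # keep only timestamps from the last 24 hours
    recent = [t for t in _history[ip] if now - t < WINDOW]
    _history[ip] = recent
    if len(recent) >= POSTS_PER_DAY:
        return False     # over the limit: reject (or flag for review)
    recent.append(now)
    return True
```

A sliding 24-hour window like this is a bit fairer than a fixed calendar day, since a spammer can’t just wait for midnight to reset the counter.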

I was pretty impressed with the Stop Forum Spam site – the guy spamming mine was on that list already. I guess the more sites use anti-spam databases, the harder it is for spammers to get their stuff out there.

Any other ideas are welcome!