The Amazon Free-Tier Year

As of now, this site is being hosted (for free) on the Amazon cloud. I signed up for the free year back on July 1st, 2014, and will need to arrange for a more permanent home after June of this year. I might stay with Amazon and pay for the hosting of this small (virtual) server. It has worked well for these past six months, allowing me to learn about cloud hosting. (I also created an instance of a virtual server on Digital Ocean.)


I Want Arrays

A little more than a year ago, I built FelixBlack, my main development machine.

Windows 8 had just been released and Microsoft was offering discount pricing. Additionally, storage and memory-- the keys to a fast machine-- were coming down in price. USB 3.0 was being touted in the press and was available on the new motherboards. Toshiba had an inexpensive 3-terabyte external drive on the market, the Canvio, on sale for a mere $99 (that magic price point). Even the SSD models from Intel were becoming affordable.

My local MicroCenter had stock, my credit card had room, and I had that old familiar feeling-- techno-lust.

Now, I am not an early adopter. I am, by nature, cautious, especially when it comes to my main development machine. I follow behind the early adopters, waiting for that first substantial price drop and also waiting, more importantly, for the bugs to shake out. But, as cautious as I am, when I go, I go Big...

I admit I went a little crazy. I spent more than $1100 over the course of a few daze. [That may not sound like much to corporate procurers, but I am a one-man, self-funded, perpetual startup, and I'm always on a ♪Low Budget♫.]

I bought the motherboard, CPU, and memory from NewEgg, and visited the brick-and-mortar store to get my hands on a couple of those USB 3.0 drives, along with my copy of Windows 8. I relied on my bonepile for the case and PSU, and answered a Craigslist ad to pick up a cheap Radeon card and a used SSD.

When I was done putting it all together, I had a nice, even, 8.0 overall score on the WEI (Windows Experience Index).

I was running at 4.3 GHz, water-cooled, with 32GB of dual-channel DDR3 memory, and -- get this -- 21 terabytes of fast online storage.

What is my definition of "fast"?

Well, here's a screen shot of a 5GB file copy:


Now, that's fast, in my book. Of course, when I started computing, we had 8-inch floppies, and a hundred kilobytes per second was fast.

The Year Of Living Dangerously

There are many ways to build a fast computer. One way focuses on the CPU and motherboard. This can be expensive. A long time ago, I learned the shortcut to a fast, practical machine-- focus on I/O throughput.

So, when I built FelixBlack, I decided to live dangerously, and I joined all of my 3TB Canvio drives into striped arrays. That is, I went RAID Level 0 (zero) for speed. I threw caution to the wind. I became religious about making backups, developed the habit of adding redundant clones of all my important files, and I have been living high off this dangerous hog ever since.

Now, my drive sub-system is-- in reality-- not that fast. The 529 MB/s shown in the graphic is not the raw drive speed, but a side effect of the write cache implemented under Windows. With 32GB of memory, "small files" (under a dozen gigabytes or so) blaze by at speeds from 300 to 500 MB/s, but longer copy operations will eventually saturate the cache, and that kind of speed cannot be sustained. Here is a larger copy, a sequence of graphics to illustrate this, where I clone 32GB of data from an external USB 3.0 drive to the internal array...


Here you can see the initial speed is a nice sustained rate of 330 MB/s...


After a while, though, the speed drops, as the cache has been saturated. Now we are seeing the actual speed of the underlying drive subsystem.


This speed would be maintained all the way to the end of the copy, unless something or someone intervenes...


Here I pause the copy for a few moments. After the drives have completed their pending I/O, when I resume, the speed surges again. Pausing has the effect of flushing the saturated cache to disk, allowing Windows to blaze through that (cached) copy operation again...
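The effect is easy to reproduce on any OS with a write cache, not just Windows. Here is a small Python sketch (sizes scaled way down for a quick demo) that times the same write twice: once letting the cache absorb it, and once forcing each chunk to disk with fsync:

```python
import os
import tempfile
import time

CHUNK = 8 * 1024 * 1024      # 8 MB per write
COUNT = 8                    # 64 MB total -- kept small for a quick demo

def write_speed(path, sync_each_chunk):
    """Write COUNT chunks of zeros; optionally force each one to disk."""
    data = b"\x00" * CHUNK
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(data)
            if sync_each_chunk:
                f.flush()
                os.fsync(f.fileno())   # push past the OS write cache
    elapsed = time.perf_counter() - start
    return (CHUNK * COUNT) / elapsed / 1e6   # MB/s

with tempfile.TemporaryDirectory() as tmp:
    cached = write_speed(os.path.join(tmp, "cached.bin"), sync_each_chunk=False)
    synced = write_speed(os.path.join(tmp, "synced.bin"), sync_each_chunk=True)
    print(f"cached: {cached:,.0f} MB/s   fsync'd: {synced:,.0f} MB/s")
```

On a machine with plenty of free RAM and spinning drives, the cached figure is typically several times the fsync'd one; the fsync'd number is the one closer to the sustained speed of the underlying drives.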

FelixBlack is my main host for many different applications. I have several other machines, running Windows 7, XP, ArchLinux, Fedora... but now many of those hardware platforms sit idle, because FB8 does such a great job of hosting virtual images that I hardly need to cold boot other hardware. In many cases, the virtual instance running on FB8 is faster and more responsive than the physical machine I am emulating.


K2 Problems

Before settling in with ProcessWire, I spent some time evaluating other CMS frameworks. For a few days, I experimented with Joomla, Gantry, and Wordpress, among others, and I also played with some of the various add-ons, extensions, themes and examples that are widely available.

I spent more time than I'd care to admit. It is easy to get swallowed up in the whole process of evaluating the capabilities of various systems.

Along the way, I uncovered some problems with each of the top CMS dogs. Today, I present just one example:

Herein lies another cautionary tale...

K2 is kind of a framework-within-a-framework. It is designed to extend and add value and functionality to Joomla. I read good things about K2, so I added it to my list of evaluations (in for a penny, in for a pound). K2 adds support for uploading media files, and also offers some automatic extras-- graphics files are replicated in various sizes, so that you, as a web publisher, can optimize and minimize the payload delivered to clients. Instead of sending that entire 1920x1080 graphic to a client with a smaller screen, for instance, you can easily send a quarter of that, because K2 will have created a smaller version, automagically.

I implemented a test site using this feature. I made it responsive to media queries, and uploaded this main graphic for my test article:


This image is a 693x454 capture from my Windows 8 desktop. The size of this PNG image is modest, only 14kb (partly because I run a high-contrast theme and only 330 colors are used). But when I uploaded this graphic with K2, I was dismayed to learn how it performed.


K2 takes whatever pix you upload, and creates multiple copies of differing sizes, so that you (it) can provide a size appropriate to the page and device you're (it's) serving. There is a configuration page where you can specify the canonical sizes, which it maps (like T-shirts) to Small, Medium, Large, etc.

A total of seven (7!) copies are made. The original is then discarded!

First, a copy of the same dimensions is made, using the Quality percentage defined in the K2 parameters.

Then, six more copies (XS,S,M,L,XL, and Generic) are made.
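The size mapping itself is just proportional scaling. A minimal Python sketch of what such a resizer computes (the target widths below are hypothetical stand-ins, not K2's actual defaults-- those are whatever you set on its configuration page):

```python
# Hypothetical target widths, one per T-shirt size name.
SIZES = {"XS": 100, "S": 200, "M": 400, "L": 600, "XL": 900, "Generic": 300}

def resized_dimensions(width, height, sizes=SIZES):
    """Map a source image to each named target width, preserving aspect ratio."""
    return {
        name: (target_w, round(height * target_w / width))
        for name, target_w in sizes.items()
    }

# The 693x454 desktop capture from this article:
for name, (w, h) in resized_dimensions(693, 454).items():
    print(f"{name:8s} {w}x{h}")
```

K2 then encodes each of these as a separate JPEG file-- plus the same-size clone-- which is where the seven copies come from.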

In my case, I chose 100% quality, then examined the resulting files. I was dismayed to discover a loss of quality and color vibrancy. Viewing the original side-by-side with the conversions, I could easily discern color shifts and other artifacts.

Not as important, but certainly a factor-- the multiple files created (all jpegs) were horrendously large.

I uploaded a PNG graphic, one of four in a series, captured from my desktop (so that I could write an article on Windows 8 disk write caching-- see I Want Arrays). This original graphic is 693x454 pixels, uses 330 colors, and has a file size of 14kb. The seven copies made by K2 amounted to 746kb. The same-size clone, alone, is more than eight times the size of the original (118kb!).

Most importantly, every picture looks worse. The JPEG clone (copied into K2's 'media/k2/items/src' directory), with the exact same dimensions, looks pale and faded, as if viewed through a fog. Checking the color count revealed a possible root cause: the original's 330 colors had morphed into 6,142.

Must be a whole 'lotta dithering going on.
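Dithering aside, there is a simpler mechanism that guarantees some color drift: baseline JPEG converts RGB to YCbCr and stores each channel as an 8-bit integer, so even at 100% quality-- before chroma subsampling or DCT quantization touch anything-- the round trip does not preserve every color exactly. A stdlib-only Python sketch (the test palette is arbitrary):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr transform used by baseline (JFIF) JPEG."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

def round_trip(rgb):
    """RGB -> 8-bit YCbCr -> RGB, with the integer rounding JPEG imposes."""
    y, cb, cr = (round(c) for c in rgb_to_ycbcr(*rgb))
    return tuple(min(255, max(0, round(c))) for c in ycbcr_to_rgb(y, cb, cr))

# An arbitrary 32-color test palette:
palette = [(i, 255 - i, (i * 7) % 256) for i in range(0, 256, 8)]
changed = sum(1 for c in palette if round_trip(c) != c)
print(f"{changed} of {len(palette)} colors did not survive the round trip")
```

Every shifted pixel becomes a new entry in the color count, and quantization artifacts at high-contrast edges pile many more on top-- a plausible path from 330 colors to 6,142.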

All of the other manufactured images look similarly distorted.

The moral of the story could be drawn from any one of a number of our collection of platitudes. There's no such thing as a free lunch. You have to do your homework. Be careful what you wish for. I could go on, but the bottom line is, you always need to put the work in, to do the cost/benefit analysis, whenever proposing to incorporate Other People's Code into your applications.

Tags: OPC, NIH


Up and running

This is my first post on the new site. Since the server crashed (over at my previous, shared, hosting service), I've been working on independent hosting, and here is the result. As seen in the footer of these pages, I am now running ProcessWire for the blog and public-facing pages. Adding PHP as yet-another-language, I don't feel the same comfort and expertise as I do in C and C++. Independence requires that I delve deeper into the code.

Learning these many scripting languages (Perl, Python, and now PHP) over the years has been gratifying, but I get nervous the further I get from the metal.

I'm using (relying upon) yet-another-framework, here and now, with ProcessWire. Add to this the Rails framework for the Ruby-based applets, the Node.js and Backbone JavaScript (or CoffeeScript, to be more precise) services, the standard reliance on jQuery, and the Bootstrap CSS collection, and I can see myself moving further and further away from the hand-crafted, cycle-shaving, byte-counting assembly code of my distant past. It's okay. I'll get over it-- my lifelong NIH (Not Invented Here) syndrome. This is cool stuff nonetheless, and I invite my readers to check out what I consider to be the best available CMS framework, ProcessWire.

Categories: Housekeeping