C++ Newbie Tour: How to Use CMake with CLion, GoogleTest and Your Code

C++ Newbie Tour

First things first, let's start with terminology. You see, I started learning C++ somewhat recently, which may be puzzling if you know me — I've been building my career in programming for over twenty-five years. Well, despite programming professionally in C, Perl, BASH, Java, and Ruby, I had somehow skipped C++. But then, when I decided to play with hardware like Arduino, of course I wanted to apply the Object-Oriented techniques and design patterns I've acquired over the years to my Arduino library code! Ha! So I was very surprised to find most existing Arduino projects and libraries written, badly, in C. There are some exceptions, like the RF24 library, but the majority of the libs are not even written by professional programmers, and can be a bit, ... hairy.

But, it turns out that you can have your cake and eat it too.

You can absolutely build an Arduino library in C++, as long as it does not need to link with large static libraries (or, if it does — as long as you have plenty of flash on your chip — thanks, Teensy!). And that's how I got into learning how to structure my C code in a C++ way. 'Cause that's what C++ programming is all about, no?

So with this I kick off an official "C++ Newbie Tour" series of blog posts, in which I hope to share some of the important things I've been learning as I go through this process. In particular, I want to figure out how to get what those of us who've been using Rails for too many goddamn years :) are used to: nicely laid out projects, with clearly named folders for where things should go, and a magical dependency loader.

What We'll Need...

While I am on a Mac with Mac OS-X Sierra and the Xcode Developer tools, I much prefer to use JetBrains IDEs for programming in almost any language. I tend to resort to Atom when I want to peek at a project and open it quickly, but for actual development I am a big IDE fan. Using RubyMine I can out-code (in Ruby) and out-refactor almost any VI user out there :) Feel free to challenge me!

So the components I will be using in my C++ learning quest are:

  • JetBrains CLion is the IDE I will use for writing C++ code.
  • GoogleTest C++ Unit Test Library is a fantastic library we'll rely on for unit testing.
  • Because CLion supports only two build systems, we will use one of them — CMake. CMake is meant to be a much simpler Makefile generator, and is clearly gaining traction in the community.
  • We'll also use the gcc compiler, of which I have two versions installed: one comes from HomeBrew, and one comes built in from Apple.

In my BASH init files, I set /usr/local/bin to be placed before the standard system paths such as /usr/bin, /bin, etc. Since Apple does not allow /usr/bin to be writable, that's the only option when you want to override the older system binary.
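
In a BASH init file this amounts to a single line; here is a minimal sketch (the exact file, ~/.bash_profile or ~/.bashrc, depends on your setup):

 # prepend HomeBrew's directories so its gcc/g++ shadow the Apple-provided ones in /usr/bin
 export PATH="/usr/local/bin:/usr/local/sbin:$PATH"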

 # given that my PATH is "/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin" etc.
 ❯ gcc --version
gcc (Homebrew GCC 6.3.0_1) 6.3.0
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

That was the HomeBrew version.

And here is Apple's:

❯ /usr/bin/gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.1.0 (clang-802.0.42)
Target: x86_64-apple-darwin16.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

Now, if you do not have the Brew GCC installed, you should probably rectify this situation as quickly as possible. You see, the built-in compiler, on a 2017 machine with the latest OS-X, still does not appear to offer full C++11 and C++14 support out of the box.

How do I know this? Let's find out.

C++ vs C++11 vs C++14

Many things have changed in C++ since it was just C++. So it's kind of important to know what your compiler supports before using a feature that will require a rewrite if you are stuck on an older compiler.

Our test file will be called c++ver.cpp, and its contents will look like the code below. It simply uses the __cplusplus macro to determine the version, and prints it out:

#include <iostream>
int main(){
    #if __cplusplus == 201402L
        std::cout << "C++14" << std::endl;
    #elif __cplusplus==201103L
        std::cout << "C++11" << std::endl;
    #else
        std::cout << "C++" << std::endl;
    #endif
    return 0;
}

Now, let's compile it and run it using both:

# First, let's use Apple's default compiler installed with the Dev Tools:
$ /usr/bin/g++ c++ver.cpp -o default-c++compiler
$ ./default-c++compiler
C++

# Now, let's use the gcc-6 compiler installed with Brew.
$ /usr/local/bin/g++-6 c++ver.cpp -o gcc6-c++compiler
$ ./gcc6-c++compiler
C++14

OK, so we now know what each supports. But what about the size of the binary generated?

$ ls -al *c++*
-rwxr-x---  1 kig  staff  15788 May 12 17:59 default-c++compiler
-rwxr-x---  1 kig  staff   9180 May 12 17:59 gcc6-c++compiler

The newer compiler produced a binary of half the size!

And what if we add -O3 to optimize it?
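
The compile commands are simply the earlier ones with the optimization flag added:

$ /usr/bin/g++ -O3 c++ver.cpp -o default-c++compiler
$ /usr/local/bin/g++-6 -O3 c++ver.cpp -o gcc6-c++compiler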

$ ls -al *c++*
-rwxr-x---  1 kig  staff  10676 May 12 18:13 default-c++compiler
-rwxr-x---  1 kig  staff   9056 May 12 18:13 gcc6-c++compiler

Huh, so the built-in compiler's output got squashed quite a bit, while gcc6's pretty much stayed at nearly the same tiny size.

As a fun experiment, what if we replace std::cout with printf, and instead of including <iostream> — a C++ library — we include the C library <stdio.h>?

The code now looks like this:

#include <stdio.h>
int main(){
#if __cplusplus==201402L
    printf("C++14\n");
#elif __cplusplus==201103L
    printf("C++11\n");
#else
    printf("C++\n");
#endif
    return 0;
}

Compiles the same way, and hey!

-rwxr-x---  1 kig  staff  8432 May 12 18:17 default-c++compiler
-rwxr-x---  1 kig  staff  8432 May 12 18:17 gcc6-c++compiler

The files are of IDENTICAL size (but not actually identical; I checked).
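
If you want to verify that yourself, one quick way is:

# cmp reports the first byte at which the two binaries differ
$ cmp default-c++compiler gcc6-c++compiler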

Anyway, now that we've had this brief detour about C++ versions and how to invoke a compiler, let's take a look at what types of things we may be creating with our C++ project.

Targets


So targets are what you actually want to build from your code. A target can be one of three things (see the sketch right after this list):

  1. an executable binary
  2. a static library
  3. a shared library
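
To make these concrete, here is a rough sketch of how each kind of target would be produced with the g++ toolchain directly (main.cpp and mylib.cpp are just placeholder file names); CMake's job is essentially to generate and run commands like these for you:

# 1. an executable binary
$ g++ main.cpp -o myapp

# 2. a static library: compile to an object file, then archive it
$ g++ -c mylib.cpp -o mylib.o
$ ar rcs libmylib.a mylib.o

# 3. a shared library (.dylib on OS-X, .so on Linux)
$ g++ -shared -fPIC mylib.cpp -o libmylib.dylib
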
↳ Keep reading …

PostgreSQL. Love affair.

mockaroo.com – random data generator

My History with PostgreSQL

PostgreSQL is Innovating

Amazing new Features

Array Data Type

Range Data Type

Geometry / 2D Space

XML/XPath

JSON

select data->>'first_name' from friend where data->>'last_name' = 'Boo';
select (data->>'ip_address')::inet <<= '127.0.0.0/8'::cidr from friend;

JSONB

Values are native JavaScript data types: text, number, boolean, null, sub-object.

create table friend2 (id serial, data jsonb);

insert into friend2 select * from friend;

-- jsonb_path_ops indexes are smaller and faster
create index friend2_idx on friend2 using GIN (data);

Row Types

create type drivers_license as
(state char(2), id integer, valid_until date);

create table truck_driver
(id SERIAL, name TEXT, license DRIVERS_LICENSE);

INSERT INTO truck_driver
VALUES (DEFAULT, 'Jimbo Baggings', ('PA', 12314, '2017-03-12'));
↳ Keep reading …

Essentials of Wireless / RF design and Manufacturing

↳ Keep reading …

Making a Circuit Board with Eagle

↳ Keep reading …

Achieve Service Discovery, High Availability, and Fault Tolerance – This Afternoon

Hi. With this post, I'd like to start a series of DevOps-related conversations covering topics that are not only on my mind, but are also being asked about by people I meet, who run applications of all sizes across multiple clouds and all sorts of technologies.

Down Under, But Above the Ground

I started building distributed applications in 1996, when I was placed as a junior contractor in the operations group managing a very large scale project. The goal was to rebuild the entire messaging and cargo-tracking software for the Australian railway corporation aptly named National Rail Corporation. Back then, we used this fascinating commercial software called Tuxedo as middleware, a transaction manager, a queueing system, and so much more. Tuxedo has a fascinating history – which could be the subject of an entirely separate post, but suffice it to say that it was developed at AT&T in the late 1970s, in practically the same lab as UNIX itself! At some point in the 80s, it was sold to Novell, then to BEA – which eventually used it to build its flagship product WebLogic, which now belongs to Oracle. I was told that there was a time when British Airways used Tuxedo to network over 20,000 stations together.

This is where I learned how to build distributed applications – with tenets such as high availability, fault tolerance, load balancing, transactional queueing, and distributed transactions spanning several databases, plus several queues. Oh my gosh, if it sounds complicated — it's because it was! But once you get the main concept behind how Tuxedo was constructed: each node had a server process that communicated with other nodes and managed a local pool of services – it sounds very, very reasonable. I stayed on this project for about a year and a half, long enough that all of the more senior people who understood the messaging component had left, leaving me – a 23-year-old junior programmer – as the only person understanding the architecture and operational aspects of that system. That part of the waterfall was not predicted by their project managers ☺.

Now, close your eyes and imagine being flown on a corporate helicopter across the gorgeous city of Sydney, just to attend a meeting with people five levels my senior. That was fun indeed.

Back in the USA

I took my knowledge of distributed systems to the US, where I was hired to help build Topica, a hot new startup in San Francisco building a new generation of listservs, or email groups (aka mailing lists). The year was 1998, and there sure wasn't any sort of open source distributed networking software that gave us the incredible reliability and versatility that middleware like Tuxedo provided, so we bought a license. Whether this was the right choice for the company at the time is not for me to judge, but the system we built using Tuxedo (ANSI C with the Tuxedo SDK), Perl (with native bindings for C), and horizontally distributed Oracle databases ended up being so damn scalable that at some point we were noted as the source of more than 1% of all daily internet traffic at the time! And sure enough, we were sending several hundred million messages per day.

And here is the kicker: if you open https://app.topica.com/ you will see the login screen from the app we built – its functionality is most similar to that of Constant Contact. The Topica app has been running untouched, seemingly unmodified, since 2004! – twelve years! They stopped developing the app shortly after I left in 2004, mostly for business reasons. But the software endured. And it's still running, 12 years later. It was built to be reliable. It was scalable. It was transactional. What it wasn't – is simple.

This second experience with Tuxedo forever changed the way I approach distributed application development.

New Startup!

This part is hypothetical (and a bit sarcastic, for which, I hope, you can forgive me).

Say we are building a brand new web application that will do this and that in a very particularly special way, and the investors are flocking, giving us money, and so we get funded. W00T!

If I am an early engineer, or even a CTO, on this new project, I would not be doing my job if I am not asking the founders a lot of questions that have a great effect on how we are going to build the software supporting the business. And how soon.

So I pull the founder into a quiet room, hide their cell phone from them, and unload questions onto them for an hour.

The best hour spent in the history of this startup. Promise.

Six Questions Every Technology Entrepreneur Should Be Able to Answer.

  1. How reliable should this application be? What is the cost of one hour of downtime? What's the cost of one hour of downtime now, six months from now, a year from now? What is the cost of many small downtimes? What about nightly maintenance?
  2. How likely is it that we'll get a spike of traffic that will be very important or even critical for the app to withstand? Perhaps we were mentioned on TV. Or someone tweeted about us. How truly bad for our business will it be if the app goes down during this type of event because it just can't handle the traffic? And even if the spike of death happens, how important is it that the team is able to scale the service right up with the traffic within a reasonable amount of time? What is a reasonable amount of time?
  3. How important is it that the application interactions are fast? That users don't have to wait three seconds for each page to load? How important is it that the application is not just "good" (say, 300ms average server latency), but amazing (say, 50ms average server latency)?
  4. How important is the core application data to the survival of the business? For example, for a financial startup that deals with people's money, data integrity is paramount. For a social network that's merely collecting bookmarks, it's only vaguely important. Large data losses are never fun, but a social network might recover, while a financial service will not.
  5. How important it is that the application is secure? This question should be viewed from the point of view of being hacked into – once you are hacked, can you recover? If the answer is "no", you better not get hacked. Right?
  6. The last bucket deals with the engineering effort: things like cost, productivity, the ability to release often, and to hire and grow the team easily. What's the cost of maintenance, how big is the Ops team, how big and how senior must the development team be?

Oh, I hear you say the word: catastrophic.

Now, how bad is it for your business, if, say, you are hosted on AWS, and a greedy hacker takes over your account and demands ransom? Well, if you did not think about the implications of building a 'mono-cloud' service, and even your "offsite" backups are within your one and only AWS account, then the answer is – once again – catastrophic. Your business is finished.

But then, in between "oh, it hurts, but it's ok" and "we are finished" there lies a whole other category of: "our users are pissed", "we lost 20% MOU", "everyone is switching to another social network", "did you hear so and so got broken into and got their user data stolen? They've asked for my social security number, and I am furious!..."

This may not be The Catastrophe just yet, but your technology is either not scaling, not reliable, or not secure. The Catastrophe may be right around the corner.

Given that I've been building almost exclusively applications that most certainly did not want to die because of scalability, reliability or security concerns, I've applied the same patterns over and over again, and results speak for themselves. I don't like bragging, and I wouldn't say this – but for those of you still skeptical – I refer you to the uptime and scalability numbers mentioned in this presentation.

Which brings me to the conclusion of this blog post.

Six Tenets of Modern Apps

The topics and scenarios above distill down to the following tenets, which apply to the vast majority of applications built today.

As a simple exercise, feel free to write down – for your company, or an application – how important, on a scale from 0 (not important) to 10 (critical/catastrophic if it happens), the following are:

  1. High Availability. Solutions to this comprise fault tolerance, multi-datacenter architecture, offsite backups, redundancy at every level, replicas, hosting/cloud vendor independence, monitoring, and a team on call.
  2. Scalability. Scalability is the ability to handle huge concurrent load, perhaps hundreds of thousands of actively logged in users interacting with the system, which might spike to (say) 1M or more. It is also the ability to dynamically raise and lower application resources to match the demand and save on hosting.
  3. Performance. What's the average application latency (the time it takes for the application to respond to a single user request – like a page load)? What are the 99th and 95th percentiles? This is all application performance. Good performance helps scalability tremendously, but does not guarantee scalability in and of itself. Well-performing applications simply need a lot fewer resources to scale, are a pleasure for your customers to use, and are cheap to scale. So performance really does matter.
  4. Data Integrity. This is about not losing your data. Accidentally. Or maliciously. Usually some data can be OK to lose. While other data is the lifeblood of your business. What if a trustworthy employee, thinking they are connected to a development database, accidentally drops a critical table, and only then realizes that they did it on production? Can you recover from this user error?
  5. Security. This one is a no-brainer. The bigger the payoff for the hackers (or disgruntled employees), the more you want to focus on securing your digital assets, inventions, etc. Not only preventing them from being copied and stolen, but also from being erased completely. Always have the last day's backup of your database securely downloaded to an undisclosed location and encrypted with a passphrase.
  6. Application runtime cost, Development Cost and Productivity, engineering and devops teams, rapid release cycle, team size, etc. This is such a huge subject, that I will leave it alone for the time being.

In the next blog post, I will discuss specific solutions to:

  • High Availability
  • Fault tolerance
  • Redundancy
  • Recovery
  • Replication
  • Scalability
      • How to scale transparently with more traffic
      • And scale down as needed
  • Service Discovery
      • How does the app know where everyone is?
  • Monitoring and Alerting
      • How to put your entire dev team on call
      • How to alert on what's important
  • How to do all of this at a fraction of the cost it used to be just a few years ago...
  • How to stay vendor independent, and why you would want to

Thanks for reading!

Microservices

Maybe you are running one or more distributed multi-part, a.k.a. micro-services, applications in production. Good for you!

Perhaps you are struggling with high availability, such as tolerating hardware outages, maybe when your cloud provider is rebooting servers, etc.

Perhaps you may be seeing an error known as "too many clients", or "max connections reached", or "what, you think you really need another one!!??" – coming from one or more of your services (or your coffee shop barista)...

Or maybe, within your micro-services architecture, you are struggling with service discoverability, i.e. how does your app know the IPs of your:

  • search cluster
  • your backend service(s)
  • your redis cluster
  • your databases with their replicas
  • your memcaches
  • your message bus
  • cat feeder
↳ Keep reading …

Docking with the Container Conf on Autopilot

I am writing this from ContainerCamp – a single-day conference in San Francisco. It's happening inside the gorgeous Bloomingdale shopping mall. Who knew that inside the shopping mall there is a swanky co-working space? I didn't.

Anyway, I am here as a bit of an outsider, at least the way I see myself. I have been hearing about Docker, playing with it a bit, scratching my head quite a bit, and so I am certainly looking forward to some clarity today. I do strongly believe that if something is a challenge for me to fully "get", it's going to be a challenge for other people too, perhaps those who think like me.

Blood Behind

I come to this conference with over twenty years of commercial software experience. My early foray into software started with Linux version 0. I was the guy with 80 floppy disks at the university computer center, installing it on one of the firmly attached computers in the lab. Of course I didn't have permission, what sort of question is that?!

From my first true software job, which I landed in Melbourne circa 1995, I was deeply involved with Operations. I was hired to be a junior helper to the truly phenomenal Ops Group in charge of running everything required for the multi-million, multi-year project to rewrite ancient management software for one of the largest private rail companies in Australia. From cargo tracking with live sensors, to routing of the (iron) containers, to ticketing, this was a major undertaking, and one that (predictably) got delayed by over a year.

It was during that time that I got to appreciate the complexities and intoxicating power of making large distributed systems run continuously, withstanding hardware outages, network outages, train outages, human errors, and in general – tolerating anything at all that could possibly fail. I believe that this was the experience that would make a lasting imprint on the way I think of building and running distributed software.

Ops Team and Dev vs Ops (Topica)

DevOps Team, and Dev without Ops (Wanelo)

Chef

Joyent Zones

Containers

Conference Quotes:

"Containers are not a panacaea!" "Containers are not virtualization" –– RedHeat

Container Koolaid

Quote from http://kubernetes.io/docs/whatisk8s/:

The Old Way to deploy applications was to install the applications on a host using the operating system package manager. This had the disadvantage of entangling the applications’ executables, configuration, libraries, and lifecycles with each other and with the host OS. One could build immutable virtual-machine images in order to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable.

And then, of course,

The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.

↳ Keep reading …

How to Bash Your Terminal, and Bash-It Good...

Those of us who work on building software inevitably spend a portion of our time typing various commands on the command line.

And, unlike twenty years ago, when our terminal was a non-graphical ASCII terminal, today's terminals are very feature rich and able to express things like millions of colors, fantastic new fonts, not to mention community shortcuts and goodies that are packaged in convenient libraries such as Bash-It, or Oh My ZSH.

Some History

A very long time ago, some day back in 1999, I switched to Mac OS-X as soon as it was released, for one and only one main reason: the glorious FreeBSD-like command line. Believe it or not, until then our choices of operating systems were pretty much limited to Microsoft Windows, with its horrendous MS-DOS command prompt mode, Mac OS 9, which did not have a command prompt AFAIK, and of course the early versions of Linux, which at the time often required installation from about 80 floppy disks.

I was a huge fan of the UNIX command line, with its ability to pipe commands into each other, and to construct complex and very powerful means of processing text, filtering, replacing or counting anything that vaguely resembled a parseable text file.

But until then, we had to deal with something like this:

  • quick intro into the project
  • why run bash-it?
  • powerline fonts and prompts
  • reinvent1 prompt
  • extending bash-it

↳ Keep reading …

The incredible power of Fork

The incredible power of a fork.

I have always filled my spare time with various projects. A good amount of these were software projects.

I started coding sometime in the late 80s / early 90s, and by 1995, I was working in software professionally. Projects back then involved creating one of the first community websites on the planet (RusCom.org.au – which was the directory for the "Russian-speaking community of Melbourne", in Australia). I also remember installing a pre-1.0 version of Linux using something like 40 floppy disks. Insanity!

Most recently, and after a few years of running the engineering organisation at Wanelo, I went freelance, which created an opening for more projects of my liking. And so I started writing little tools, and little helper projects for myself and others.

In just a few months I've mostly completed "Pullulant" — the framework for installing developer tools on OSX, "WarpDir" — a little command line tool for bookmarking directories, and am about to start "Supernova" — a Chef-based cloud template for web applications. This is so fun! I am really enjoying being back in the coding game.

What makes coding today so incredibly pleasant, is the

↳ Keep reading …

DevOps Guide to Docker: Why, How and Wow.


Preface

We are quickly approaching 2016, only a few days away. Docker has been the new hotness for several years, and our cloud hosting company Joyent had been ahead of the game: they had been running light-weight containers since nearly the company's inception sometime in 2006. This is because Joyent's host operating system (i.e. their software 'hypervisor') has always been SmartOS, which descended from Sun Microsystems' Solaris OS. SmartOS today is open source, and is a very modern OS. But unfortunately it is not nearly as popular as Linux platforms. As a developer I've appreciated many features that SmartOS offered. This is a topic for a whole other post, but if you are curious, here is a good (but slightly outdated) overview.

But what's important to know about Joyent is that when SmartOS was used as a hypervisor to run SmartOS virtual instances, these instances could be resized dynamically, shrunk or grown in terms of RAM, CPU or disk IO – all without needing to reboot! They also reboot in just a couple of seconds. When this flexibility was combined with complete Chef automation, we (at Wanelo.com) were quite happy with our DevOps situation.

In fact, I presented on this very subject at RubyConf Australia early in 2015, focusing on how Wanelo managed to run a large infrastructure serving millions of users without hiring operations people. I also shared several high availability patterns. The talk is called "DevOps without the Ops".

At the time of this writing, the largest leading cloud providers support Docker container-based virtual machines, as does Joyent. Believe it or not, I haven't had a chance to play with it yet, and so I decided to do a small project with Docker and AWS, and document my "a-ha!" moments about Docker and AWS, hoping that this guide might help someone else fast-track into Docker.

Given my larger-than-average developer experience with DevOps and Operations in general, I thought that my journey would likely be similar for many others with my background. And that happens to be a fantastic motivation for a blog post.

Audience

I am going to make an effort to write in a pretty inclusive style, where you do not have to be a rockstar Rails programmer to understand what's going on. But you should be familiar with UNIX OSes, with deploying web applications, and with setting up the "stuff" that makes your web application work (such as nginx/apache, a database, etc). You may have dabbled in AWS, Docker or Chef, and they left you confused. That is exactly who I am looking for. If you are confused – you've come to the right place, because at the beginning of this blog I am pretty confused too! This is why I am writing it :)

So let's get started!

Project

This blog runs on the Jekyll blogging generator. I call it a generator because it simply generates static pages that are then served by a web server like nginx. As far as I know you cannot add any dynamic functionality to Jekyll server-side (only client-side, with AJAX and JavaScript APIs).
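
For reference, the Jekyll workflow itself boils down to a couple of commands (a sketch, assuming Jekyll is already installed as a gem):

# preview the blog locally while editing
$ jekyll serve

# generate the static site into the _site/ directory, ready for any web server
$ jekyll build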

We will migrate this simple blog from Github Pages to AWS/Docker, discussing things along the way, showing the code, and comparing our impressions :) If you are reading this blog on the http://kig.re/ domain, then it's already been migrated.

The second phase, after we get it to run locally, is to deploy it to production. I would like to configure two EC2 instances, one in each of the two availability zones we'll pick, and put them behind an elastic load balancer, for maximum high availability. I will constrain myself not to add any more complexity, but I typically fail at that 😜

Goals

As a relative newcomer to the "Docker Scene", I am wearing hipster pants and a twttr t-shirt. No, not really. But I am curious about this technology, and how it changes my "toolbox": the list of tried and true solutions to common problems that I have accumulated over two decades of commercial software development.

Most importantly, I want to first understand Docker, its architecture, and, specifically, its innovation. What are the key pieces that make Docker so popular? I would like to prepare, in a repeatable way, my laptop (or any future Mac OS-X computer) with the tools necessary to run Docker containers locally, on Mac OS-X El Capitan. What's the relationship between Chef and Docker? As a DevOps engineer, do I need them both? Or does Docker alone provide a sufficient replacement for automating application deployment across development, staging and production environments?

So there, above, are my personal goals for this project.

But what are the big challenges in front of DevOps engineers today? Well, distributed systems with often conflicting demands can be very very difficult to manage, while satisfying orthogonal requirements. So simplifying anything in this area is a big win, especially for the businesses running software in the cloud.

To really break this down, and fast, let's pretend that we are now in the Land Of "Ops", and we are managing a large production infrastructure.

Each business is unique in that it prioritizes the below list of requirements in a unique way. For a financial tool, security may be paramount, but for a site that does not record any personal information – it might not. As a result, depending on the business, we always have to juggle requirements and expectations across several orthogonal axes (and by orthogonal I mean that they are mostly independent of each other – one can vary drastically while the other is fixed, etc.)

From my experience, the top requirements that Engineering as a discipline at any organization must satisfy are a combination of the following (in no particular order):

  1. Ability to easily change, deploy, and understand the software.
  2. The cost (and risk) of breaking production with a deployed bug.
  3. The cost (and risk) of an unplanned downtime, and cost of being down per minute.
  4. Ability for the system to automatically scale up and down with the incoming traffic.
  5. Predictability of the operational cost, for example $x per month, per 1K MAU.
  6. Security posture, risk factor, probability of a compromise. After-effects of a break-in.

Often these are at odds with each other: for example, increasing the security posture means slowing down development and increasing operational cost. And so on.

I really like to distill things down as much as possible, so when I do that with the list above, here is what I get:

Today's enterprise software development requires a carefully measured balance of effort, spent on the abilities to grow, change and scale software and the service on one hand, and risks of a tumble, crash or a data leak on the other.

So with that, let's move along to see what Docker gets us. But before that, I want to better understand the concept and the history of virtualization.

Computing "Inception"

Virtualization is a concept that should be relatively easy to understand. For many years now it has been possible to run one operating system inside another.

With Mac OS-X's growing popularity in the early 2000s, this concept reached the "consumer". It was no longer a question of large enterprises, but of developers, designers, and gamers. People wanted to run OS-X applications side by side with Windows. And in the case of a Mac, that was not even possible until Apple switched to the Intel processor family.

2005 – Parallels

The most popular, and probably the very first, was Parallels, Inc. It allowed Mac OS-X users to run Windows next to their Mac apps, without having to reboot their computer.

Parallels created software that emulated the hardware – a layer that pretends to be a collection of familiar components: BIOS, motherboard, RAM, CPU, and peripherals such as USB and SATA disks. I write software for a living, and to me this feels like a pretty difficult task to accomplish; Parallels were definitely the first to market, releasing their first version in 2005.

While Parallels did technically work, for many resource-intensive applications running inside Parallels was simply not realistic. It was just too damn slow. Consumers quickly discovered that running games inside Parallels was pretty much a non-starter.

2006 - BootCamp

Steve Jobs must have liked what Parallels did, and decided to do it better, as he typically does. Well, maybe not better, but differently. BootCamp allowed Mac users to run Windows at the near-native speed of their Mac, but the catch was that only one OS or the other could run at a time. You could run other operating systems too, just like with Parallels.

↳ Keep reading …