Monday, July 24, 2017

Open learning and the race to the Cloud

Last year I got a chance to work on a customer solution deployment based on a cloud-heavy Apache Stratos + Kubernetes + Mesos setup. It was a brash, hands-on introduction to the circuitry that powers many a cloud solution. At the end of that engagement, I made a mental note to pay a second visit to some of the technologies I was exposed to, once time was at hand.

Not all of us Sri Lankans have the mental abilities or the tough stomach needed to digest and perform the academic aerobics required to secure a good free education in our country. Even fewer have the financial means to fund a good education abroad. So naturally, many of us who wish to continue learning find ourselves doing so on platforms such as Coursera, Udacity and Khan Academy. With these platforms you can now access the latest content, curated and delivered by experts from prestigious universities halfway around the world. All of these changes tip the balance in favor of those who are truly interested in a subject.

Apart from the platforms mentioned above, as an engineer who’s interested in keeping up with the technologies in my space, I’ve found the free trials offered by many vendors to be ridiculously useful.

From a provider’s perspective, it’s a great way to market your offering, especially if the long-term success of the offering depends on the adoption and loyalty of specialized consumers. This happens to be the case with the cloud. In the long run, the ultimate winner or winners will be decided by the level of developer traction secured. It is for this very reason, I believe, that you can now get $300 worth of computation/storage time on the Google Cloud Platform for free, use some AWS services for free for extended periods of time, and try out products like Apigee Edge and RedHat OpenShift without having to spend a cent.

Back to the topic at hand: this month I thought I’d get back to where I left off last year, get my hands a little dirty and get a taste of the latest from the cloud space. I cashed in my GCP green and took RedHat OpenShift for a spin. This post captures a few thoughts from the experience.

Google Cloud Platform

Coming from a middleware background, it’s easy to categorize GCP as an end-to-end middleware platform in the cloud (forgive my ignorance, I’m sure it's a lot more than that). As such, it offers all the components one could hope for when modernizing IT systems.


Infrastructure

Users may exercise as much or as little control over their infrastructure as they wish. Those seeking IaaS-level control over their systems may build up starting from Compute Engine. Users who want to delegate work at this layer to the platform may build up from Container Engine or App Engine instead.
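At the IaaS end of that spectrum, even the Compute Engine layer can be driven directly from code. Below is a minimal sketch using the google-api-python-client discovery client; the project and zone names are my own placeholders, and application-default credentials are assumed to be configured already.

```python
# A minimal sketch, assuming application-default credentials are set up and
# that the hypothetical project "my-gcp-project" and zone "us-central1-a" exist.
from googleapiclient import discovery

# REST client for the Compute Engine API (the same API gcloud wraps).
compute = discovery.build("compute", "v1")

# List VM instances in one zone and print their names and states.
result = compute.instances().list(project="my-gcp-project",
                                  zone="us-central1-a").execute()
for instance in result.get("items", []):
    print(instance["name"], instance["status"])
```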

Storage

A bucket is the atomic storage unit in the GCP context. Based on the accessibility and performance requirements of a bucket, the storage options can be grouped into three main categories:
  • Standard, which provides the best SLA for storage that needs to be accessed frequently and globally;
  • DRA (Durable Reduced Availability), for storage needs that are less taxing;
  • Nearline/Coldline, for storage that is rarely accessed, such as backups or disaster recovery.
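As a rough illustration of buckets and storage classes, here is a small sketch using the google-cloud-storage Python client; the bucket name and object path are hypothetical, and bucket names must be globally unique.

```python
# A minimal sketch with the google-cloud-storage client library.
from google.cloud import storage

client = storage.Client()

# Pick a storage class that matches the access pattern described above.
bucket = client.bucket("example-archive-bucket")   # hypothetical name
bucket.storage_class = "NEARLINE"                  # rarely accessed backup/DR data
bucket = client.create_bucket(bucket)

# Objects ("blobs") live inside buckets.
blob = bucket.blob("backups/2017-07-24.tar.gz")
blob.upload_from_string(b"...backup bytes...")
```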

GCP provides an array of storage solutions meant for application consumption and insight generation: everything from RDBMS services such as Cloud SQL to NoSQL services such as Cloud Datastore. Storage for analytics is provided through Bigtable, which offers the high write throughput needed for scenarios that push in large volumes of data to be processed later on.
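To give a flavor of the NoSQL side, the sketch below writes and queries an entity with the google-cloud-datastore client; the "Order" kind and its properties are entirely made up for illustration.

```python
# A minimal sketch with the google-cloud-datastore client; the "Order" kind
# and its fields are hypothetical.
from google.cloud import datastore

client = datastore.Client()

# Datastore is schemaless: entities of a "kind" hold arbitrary properties.
key = client.key("Order")                      # let Datastore allocate the ID
entity = datastore.Entity(key=key)
entity.update({"customer": "alice", "total": 42.50, "shipped": False})
client.put(entity)

# Query back all unshipped orders.
query = client.query(kind="Order")
query.add_filter("shipped", "=", False)
for order in query.fetch():
    print(order.key.id, order["customer"], order["total"])
```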


Utilities

Utility capabilities such as in-flight data transformations, reliable messaging and identity management are provided through solutions such as Cloud Dataflow, Cloud Pub/Sub and Cloud IAM.
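A quick, hedged example of the reliable-messaging piece: publishing a message with the google-cloud-pubsub client. The project and topic names here are placeholders, and the topic is assumed to already exist.

```python
# A minimal sketch with the google-cloud-pubsub client; project and topic
# names are hypothetical.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "orders")

# publish() returns a future; result() blocks until the message is accepted.
future = publisher.publish(topic_path, data=b'{"order_id": 1}', source="web")
print("published message id:", future.result())
```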

For anyone who wishes to experience the platform and its breadth, this Coursera course would be a great place to start [1].


What I liked about the platform and the free trial:
  • There seems to be a one-to-one mapping between GCP services/products and the capabilities/offerings you would expect from a middleware vendor. This makes mapping solutions to the platform easier.
  • The platform-wide logging and tracing capabilities that just work.
  • The utilities provided to make development work easier, such as APIs for all the services, client-side libraries that ease integration, and the CLI (figure 1), which wraps the service APIs to give developers a convenient means of access.
  • The flexibility provided in shaping the consumption architecture; for example, someone building a backend for a mobile application may consume some of the GCP services directly (such as storage), or use GCP's API creation capabilities to expose the services in much the same way the GCP APIs are consumed, but with better control [2][3] (see the sketch after this list).
  • A responsive support system; I tested the waters with a Container Engine query and sure enough got a prompt, appropriate response.
  • The ability to pay for only what you use (which is a given and a key selling point for the cloud).
  • The $300 given by Google looks like it can carry evaluation work far.
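To illustrate the second option from the flexibility point above, here is a rough sketch of a backend of your own wrapping a GCP service before exposing it to a mobile client. Flask, the endpoint shape and the bucket name are my own assumptions for the sketch, not anything prescribed by GCP.

```python
# A minimal sketch: a small backend wraps Cloud Storage so mobile clients talk
# to your API instead of the GCP service directly. The bucket name is hypothetical.
from flask import Flask, request, jsonify
from google.cloud import storage

app = Flask(__name__)
bucket = storage.Client().bucket("mobile-app-uploads")

@app.route("/uploads/<name>", methods=["PUT"])
def upload(name):
    # Your own API surface: authenticate the caller, size-limit the payload,
    # rename objects, etc. before touching the GCP service.
    bucket.blob(name).upload_from_string(request.get_data())
    return jsonify({"stored": name}), 201

if __name__ == "__main__":
    app.run(port=8080)
```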
 


Figure 1
RedHat OpenShift

Having built a billion-dollar global organization that has withstood over two decades of industry battering, RedHat has, I think few would disagree, mastered what it takes to make an Open Source business model work. Their key proposition is value addition (be it functional or operational) on top of generic Open Source, a simple but effective mantra that works!

OpenShift is aimed at the on-premise managed cloud space, which is a fancy way of saying it puts some of the best of the tech that makes cloud services like GCP and AWS work behind a corporate network, for added control. Therefore, to gain the most from the offering it should be deployed inside an organization's data center.


First impressions and what I liked about the trial:
  • The trial allows 14 days of all-access to the PaaS solution, deployed on either GCP or Azure. If you go with GCP you get about 6 hours of computation access at a time, which is plenty for anyone who wishes to evaluate the functional capabilities.
  • The product UI (figure 2) is easy to understand and difficult to get lost in.
  • The CLI client effectively wraps the Kubernetes and OpenShift APIs, making deployment and routing setup easier than it would be otherwise (see the sketch after this list).
  • Access to Docker Hub from within the trial setup.
  • The concise lab documentation that takes you through the key functionality. 
  • The product seems to support the leading conventions and technologies in the space, with their own value-added spin-offs, or vanilla components when they are good as is.
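As a rough illustration of the bullet about the CLI wrapping the Kubernetes API, the sketch below creates a deployment by calling that API directly with the official kubernetes Python client; the names, image and replica count are placeholders I chose for the example.

```python
# A minimal sketch of what the oc/kubectl CLIs do under the hood: talking to
# the Kubernetes API directly. Names and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()          # reuse the credentials the CLI already set up
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="hello",
                    image="openshift/hello-openshift",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The CLI client achieves the same outcome with a single command and sensible defaults, which is exactly why I found it a convenient wrapper during the trial.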



Figure 2






Sunday, July 2, 2017

The Everything Store: from Books to the Cloud

For a while, I was curious as to how a company known and reputed for selling books became an innovator and leader in cloud technology and services. The two domains seem unrelated, and the fact that the same founder is behind both ventures seemed coincidental.

"The Everything Store" by Brad Stone is definitely one of the better biographical nonfiction works I’ve read, I still have a few more chapters more to go but it has already answered the question I wanted it to answer when I decided to purchase it. 

When you get started on the book it becomes clear that Bezos always intended Amazon to become a technology company; he wasn’t really sure how to get there, but that was the end goal.

Time and time again Bezos decides to capitalize on opportunities that common sense or conventional wisdom would say have a small chance of paying off, and so should be passed over for other opportunities with a greater chance of paying off big. At times he approaches these opportunities like a scientist.

His decision to sell books on Amazon was fueled by his realization that a book is the same item regardless of where you decide to buy it, a quality that makes consumers more likely to brave the untested waters of online shopping, given the price is right! He also realized that books in America were controlled by a handful of publishers and that they are relatively easy to ship in good condition. For someone who has not taken the time to look into the Amazon story, it might seem that from the point Amazon becomes a hit with consumers it’s all smooth sailing and ventures that pay off big, but that’s not the image “The Everything Store” paints.

Before venturing further into how Amazon transitioned from selling books to selling server resources, we need to talk about luck! In business and in personal life, the importance of the role luck plays in deciding the outcomes of our actions is often understated. Unlike system design in engineering, in the real world the scope of the systems we are part of cannot be defined with absolute certainty; there are just too many known and unknown variables at play. For the sake of argument and the continuation of this review, let’s define luck as all the factors beyond the control of the individual that have a moderating or causal relationship with the outcome the individual desires.

Luck, as it happened, was favorable for Amazon and Bezos when they first started out, but this was not always the case. What I discovered as I went about reading the book were the countless endeavors by Bezos and his team that had little to no success. The book tells the story of a company that strives to better its core business, book retail, while striving to attain Bezos's vision of transforming into a technology company. The work done by Jeff Wilke on Amazon’s fulfillment centers shows Amazon’s and Bezos's commitment to their core business.

Parallel to the efforts of “making what works better”, Bezos and Amazon show a relentless desire to venture out into the technological domain; book preview, A9 search and internal system modernization are examples of this desire. Though some of these endeavors had little success, it becomes apparent that Bezos never gave up on the vision, and when O’Reilly proposed exposing Amazon sales data as APIs for the benefit of the community, Bezos seems to have become aware of another stakeholder in technology companies such as Amazon: the developers. This appears to have sparked his interest in offering outside developers services built on the infrastructure Amazon had worked tirelessly to make one of the best in the industry. The decision would also have been influenced by Amazon's earlier success in providing warehousing services to external sellers through the superior fulfillment centers Wilke had built.

By the early 2000s the foundation was finally in place for Amazon’s foray into the cloud. Bezos put a resourceful executive, Andy Jassy, in charge of his latest pet project, Amazon Web Services (AWS). Just as with his decision to start Amazon, luck was favorable. The competition, Google and Microsoft, had their attention on the shiny new opportunity Steve Jobs and Apple had uncovered with the iPhone. For the competition, on one hand were the proven success and sizeable profits of smartphones, and on the other a barely break-even, untested opportunity in developer services. By the mid-2000s Amazon had rolled out EC2 and S3, giving it close to half a decade's head start over the competition.

In my opinion, the Amazon story tells you that not all good decisions pay off, but if you keep making good, rational decisions, milk what works for all it’s worth, and have some luck on your side, you are bound to do well. What makes Bezos a great leader to me is his ability to keep identifying good opportunities, be it in business ventures or in hiring great people. Time will tell if he will be remembered as one of the greatest.
