
Blog

A Kernel? In a Container?!

Recently I was blessed with the opportunity to attend KubeCon 2017 in Austin, TX, where I got the inside scoop on Kubernetes, OpenShift (❤❤❤), Heptio, and a few other projects that I've come to enjoy since starting my career at Linux Academy. By a stroke of sheer luck, I happened to be present when the OpenStack Foundation announced a totally new project, named Kata Containers, combining elements from Hyper runV and Intel Clear Containers.

[A basic diagram of the Kata Containers infrastructure, from katacontainers.io]

According to their website, Kata Containers offers "...the speed of containers, [with] the security of VMs" by including a light-weight custom kernel in each container.
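To make that concrete, here is a minimal sketch using the Docker SDK for Python. The runtime name kata-runtime, and the assumption that it has been registered as an OCI runtime with the local Docker daemon, are illustrative rather than taken from the post:

```python
import docker

# Connect to the local Docker daemon (assumes kata-runtime is installed
# and registered in the daemon's runtime configuration).
client = docker.from_env()

# Run the same image under the default runc runtime and under Kata.
# Inside a Kata container, `uname -r` reports the lightweight guest
# kernel that ships with Kata rather than the host's kernel.
for runtime in ("runc", "kata-runtime"):
    output = client.containers.run(
        "alpine", "uname -r", runtime=runtime, remove=True
    )
    print(runtime, output.decode().strip())
```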

Read Post
Getting to Know OpenShift with Minishift

What is OpenShift?  According to Wikipedia, OpenShift is a "supported distribution of Kubernetes using Docker containers and DevOps tools for accelerated application development," which is a good start to describing the OpenShift ecosystem — but like most software, there are several different products under the OpenShift umbrella. Let's go over a few varieties. Built around a core of Docker container packaging and Kubernetes container cluster management, OpenShift Origin is the open source upstream community project used in OpenShift Online, OpenShift Dedicated, and OpenShift Container Platform, freely available under the Apache v2 license on GitHub. OpenShift Online is Red Hat's public cloud application development and hosting service, supporting a variety of languages, frameworks, and databases via pre-built "cartridges" running under resource-quota "gears." Those interested in a private, enterprise-supported version of OpenShift can opt either for OpenShift Dedicated, Red Hat's managed private cluster offering available in AWS or GCP, or OpenShift Container Platform (formerly known as OpenShift Enterprise), Red Hat's on-premises private PaaS product. Both OpenShift Dedicated and OpenShift Container Platform are built on a core of Kubernetes and Docker application containers created on a foundation of Red Hat Enterprise Linux. Announced during a keynote at Red Hat Summit 2017 in Boston, MA, OpenShift.io is an open online development environment for planning, creating, and deploying hybrid cloud services. Minishift, which we will be using for the below dev deployment, is a fully functioning OpenShift Origin instance with an integrated Docker registry, running locally on your machine.

Read Post
What Is Kubernetes?

If you’re working in technology, or hope to someday, you have most likely heard the word Kubernetes or seen k8s, even if only in passing. To say it’s the hottest thing out there would hardly be an overstatement. So what is Kubernetes? According to kubernetes.io, the definition of Kubernetes is as follows: “Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers.” The nickname k8s comes from the leading “K,” the 8 standing for the eight letters that follow it, and the trailing “s.” The name is derived from the Greek word for pilot or helmsman, which fits its purpose: it pilots the cluster, deploying your applications and scaling them on the fly, among its other features. Google created Kubernetes and released it as open source in 2014; it grew out of the Borg project that Google used to manage its own large-scale container architecture. With the popularity of Docker, it made sense for Google to release Kubernetes to help users implement their applications with microservices and to encourage more users to use Google Compute Engine and Google App Engine. Kubernetes, however, has a far wider reach: it is now used on multiple clouds and is one of the largest open source communities on GitHub.
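As a rough illustration of that "deploy and scale" role, here is a minimal sketch using the official Kubernetes Python client; the deployment name and namespace are placeholders, not anything from the post:

```python
from kubernetes import client, config

# Authenticate the same way kubectl does, via ~/.kube/config.
config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# "Operating application containers": list every pod the cluster is running.
for pod in core.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

# "Scaling on the fly": ask the cluster for five replicas of a deployment
# (the name and namespace are illustrative placeholders).
apps.patch_namespaced_deployment_scale(
    name="my-app", namespace="default",
    body={"spec": {"replicas": 5}},
)
```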

Read Post
Kubernetes 1.9 Release Enhances Support for Cloud Native Infrastructure (and Windows!)

Kubernetes' latest release, 1.9, is scheduled for December 13, 2017, and contains a number of improvements and expanded functionality. Changes to the Workloads API aim to address stability concerns as it reaches General Availability (apps/v1), the Kubernetes Windows beta moves forward, and the Storage Special Interest Group (SIG-Storage) works with the Container Storage Interface (CSI) community to create a single interface for Kubernetes storage solutions, all in this update.
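For a sense of what the Workloads API graduation means in practice, here is a minimal sketch, written with the Kubernetes Python client and purely placeholder names and image, of a Deployment declared against the now-stable apps/v1 group:

```python
from kubernetes import client, config

config.load_kube_config()

# With 1.9 the Workloads API graduates to the stable apps/v1 group, so a
# Deployment no longer needs a beta apiVersion such as apps/v1beta2.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        # apps/v1 makes the selector mandatory and immutable.
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo", image="nginx:1.13")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```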

Read Post
Multiple LXD Hosts Can Share a Discrete Layer-2 Container-Only Network

Interested in LXD and LXC?  Check out our new LXC/LXD Deep Dive course here at Linux Academy!  We cover topics including installation, launching containers, persistent storage, and networking, and we even dig into some fascinating use cases to make LXC useful and relevant to you right now! We've also created this LXC-LXD Cheat Sheet to help you get started with LXD right away!

Introduction

LXD is a really fun and easy way to jump into containers, especially if you have some experience with virtual machines.  LXD is designed to create machine containers, which strongly resemble virtual machines, so trying out new distributions or testing application deployments is easy and – dare I say it – fun.  LXD 2.0 brought myriad new features to the platform, but a few tasks remain adorably unfledged. A single node of LXD, for instance, can easily be initialized, and with the default settings its containers come up on their own private network.  Once a second node is added, a major limitation becomes obvious: each node has its own private network for containers, and without some networking jujitsu, containers on different hosts will never be able to communicate with one another. Getting all those containers on the same layer 2 network, regardless of which host each one resides on, is what this post is all about.

Read Post
Docker Introduction

Everyone is talking about Docker, but you already have a day job and spend your nights learning three other new technologies. Plus the kids need a story before bed, the toilet won’t flush right, the trash needs gathering up, and – who has the time?! Don’t worry, we’ve got you covered! I can’t help with the plumbing, but Linux Academy can give you a head start with a Docker introduction, and using Docker may even win you back a few of those minutes.  Docker has been touted as a “lighter visor,” and I’m still not sure how I feel about that. On the one hand, it’s correct: Docker containers are, in many ways, similar to virtual machines in that they are discrete units of compute designed around an application or set of applications making up a task. Besides being an order of magnitude more efficient by making use of the host’s kernel, containers are smaller, faster, and generally better for many tasks than virtual machines.
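A quick sketch of that kernel-sharing point, using the Docker SDK for Python; the Alpine image and a locally running Docker daemon are assumptions for illustration:

```python
import platform
import docker

client = docker.from_env()

# Containers share the host's kernel: the kernel release reported inside
# an Alpine container matches the one the host is running, unlike a VM,
# which boots its own kernel.
host_kernel = platform.uname().release
container_kernel = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

print("host kernel:     ", host_kernel)
print("container kernel:", container_kernel)
```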

Read Post
Using Docker-Compose to Simplify Multi-Container Builds

One of the primary advantages of containers is the ability to quickly deploy a single, discrete application with a known configuration, fully supported within the container. However, as we all know, within our enterprise organizations most applications do not run on their own – most have to connect to and use search services, databases, and more. Let's take a look at how to build a multi-container environment managed with a single utility, Docker-Compose, in order to simplify your deployments.
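For contrast, here is a rough sketch, written with the Docker SDK for Python and purely illustrative names and images, of the kind of wiring you would otherwise script by hand: a private network, a database, and a web container that can reach the database by name.

```python
import docker

client = docker.from_env()

# What a docker-compose.yml declares, done by hand: a private network,
# a database, and a web container attached to the same network so it
# can reach the database by its container name ("db").
network = client.networks.create("demo_net", driver="bridge")

db = client.containers.run(
    "postgres:10", name="db", network="demo_net",
    environment={"POSTGRES_PASSWORD": "example"}, detach=True,
)

web = client.containers.run(
    "nginx:1.13", name="web", network="demo_net",
    ports={"80/tcp": 8080}, detach=True,
)
```

A docker-compose.yml captures those same relationships declaratively, and a single docker-compose up brings the whole stack up or tears it down.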

Read Post
Google Container Engine

So here we are; we launched a new course at Linux Academy called Google Container Engine! If you've worked with AWS before, then you know that containers can function very similarly to EC2 instances; however, this is not the case with Google's cloud. Instead, you get a stack of servers called a container cluster: a group of cloud instances that run Kubernetes. The course covers these various features:

- An introduction to Google Container Engine that explains container cluster features such as autoscaling, hosting a multizone configuration, and how to use Kubernetes alpha features.
- An overview of the Google Cloud Platform Console.
- A look at managing container clusters through the CLI and the Console, and how to use container cluster features.
- How to use the Google Container Registry as an alternative to Docker Hub for creating your own private registry.
- How to use Kubernetes to create a pod and service running the Hexo blog (a rough pod-and-service sketch follows below).
- How to create a microservice with persistent disks using MySQL and Ghost, another blogging platform.
- How to use the Kubernetes Dashboard with your container cluster.
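As a taste of the pod-and-service topic, here is a minimal sketch with the official Kubernetes Python client. The names, labels, and nginx image are placeholders standing in for the course's Hexo example, and the kubeconfig entry is assumed to have been written already (for GKE, gcloud container clusters get-credentials does this):

```python
from kubernetes import client, config

# Load the cluster credentials the same way kubectl does.
config.load_kube_config()
v1 = client.CoreV1Api()

# A single pod running a web server (placeholder image and labels).
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="blog", labels={"app": "blog"}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="blog", image="nginx:1.13",
            ports=[client.V1ContainerPort(container_port=80)],
        )]
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# A service that selects the pod by label and exposes it externally.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="blog"),
    spec=client.V1ServiceSpec(
        selector={"app": "blog"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="LoadBalancer",
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```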

Read Post