In our last article, we looked at some of the most common compliance regulations that affect IT organizations and their cloud strategy. Understanding compliance requirements can help inform your cloud infrastructure plans, but it is only part of the equation. Today, I want to talk about how AWS Shield, announced at AWS re:Invent 2016, can help organizations address some of these complex issues as part of a more comprehensive security policy.
To say that re:Invent was a shocking revelation for many third-party vendors, integrators, and developers would be an understatement. However, this has become the norm for Amazon's cloud offering, AWS. Over five years of annual conventions, Amazon has redefined what it means to add features and react to an ever-changing, challenging market in this age of IT innovation and disruptive tech. As AWS CEO Andy Jassy said during the first keynote, "We are iterating at a faster clip than anybody." Growing from 280 significant features in 2008 to over 1,000 today, an average of roughly three new features a day, AWS has truly set a new standard for speaking to its customers' needs.

In line with tradition at the annual re:Invent, AWS announced a slew of new features and services to address those needs and advance an idea it has dubbed "being transformers" in an ever-changing, adapting IT landscape. Part of the focus during the keynotes was on customers asking, specifically, what is being done to guarantee that their use of AWS services will not increase their risk or attack surface. In response to these customer demands, and to bolster its offerings, AWS announced the general availability of AWS Shield.
Compliance, in general, continues to be a heavy concern for most information technology organizations. The regulations tend to be poorly understood, and gauging their impact on your organization (from a time, cost, and personnel perspective) can be challenging, particularly as they apply to organizations migrating to the cloud. Today, we are going to look at some of the most common compliance regulations for information technology; in a future article, we will explore some of the protections and options available when moving to the cloud.
A key part of SELinux is understanding and using SELinux contexts. Everything on your system carries a context, and these contexts determine which users, applications, and services have access to which files, directories, and applications. Even without an understanding of detailed policy creation, most SELinux users can manage their systems by viewing and altering contexts.
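The commands below sketch how contexts are typically viewed and changed. The paths and the `httpd_sys_content_t` type are illustrative, and the `ls -Z`/`chcon`/`restorecon` calls assume an SELinux-enabled distribution such as CentOS, RHEL, or Fedora:

```shell
# On an SELinux-enabled system, these are the everyday context commands:
#   ls -Z /var/www/html                                    # view contexts
#   chcon -t httpd_sys_content_t /var/www/html/index.html  # change a file's type
#   restorecon -Rv /var/www/html                           # reset to policy defaults

# A context itself is four colon-separated fields: user:role:type:level.
ctx="system_u:object_r:httpd_sys_content_t:s0"
IFS=: read -r se_user se_role se_type se_level <<< "$ctx"
echo "type=$se_type"   # the type field is what most policy rules match on
```

The type field (here `httpd_sys_content_t`) is the piece you will change most often, since type enforcement is what the majority of SELinux policy rules are written against.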
Movements like HTTPS Everywhere are working to get all sites on HTTPS. Google is one of the major backers of this movement and will eventually mark all plain HTTP sites as insecure by default in its Chrome browser. It is important that everyone secure their websites so we can all enjoy a safer Internet.

This is where Let's Encrypt comes in. From https://letsencrypt.org/about/: “Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG).” This means that anyone can request and receive a free SSL certificate to enable secure HTTP traffic. Now that there is no cost to receive a certificate, everyone can and should enable HTTPS on their websites.

This guide walks you through the basics of obtaining and applying a Let's Encrypt SSL certificate to an existing web server. It assumes you already have the web server running and the DNS records set appropriately. Since this is a getting-started guide, we will stick with Certbot, the client recommended by Let's Encrypt; there are many other clients available, and you can even create your own.
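As a sketch of that workflow, the commands below show a typical Certbot run on a Debian/Ubuntu host serving nginx. The package names, the `--nginx` plugin, and `example.com` are assumptions for illustration; other platforms and web servers use the same Certbot subcommands with different install steps:

```shell
# Install Certbot and its nginx plugin (Debian/Ubuntu shown):
sudo apt-get update
sudo apt-get install -y certbot python3-certbot-nginx

# Obtain a certificate and let Certbot edit the nginx config to use it
# (replace example.com with your own domain; DNS must already point here):
sudo certbot --nginx -d example.com -d www.example.com

# Let's Encrypt certificates last 90 days; confirm automatic renewal works:
sudo certbot renew --dry-run
```

If you would rather edit your web server configuration yourself, `certbot certonly` fetches the certificate without touching the config, leaving the files under `/etc/letsencrypt/live/`.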
In the coming weeks, we at the Linux Academy blog will be exploring SELinux, or Security-Enhanced Linux. SELinux gives users fine-grained control over access and permissions on their Linux servers and workstations. Today, we will explore the implications of using SELinux before diving into SELinux policies and command-line options in part two. SELinux is a kernel security module, originally developed by the NSA and maintained with Red Hat's involvement, that grants system owners extended access control, constraining users and applications from accessing resources. Beyond the traditional "read, write, execute" permissions of a basic Linux system, SELinux lets administrators restrict linking, moving, and appending files, and more. Additionally, access control is defined using policies, which ordinary users cannot alter, either purposely or accidentally.
IAM policies are essential when setting up permissions for your Amazon Web Services resources. Whether you are a one-person shop or part of a large organization, understanding how they work and how to set them up is crucial. If you don't set up IAM policies properly, you will create security holes, or your users won't have the permissions they need.
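As a minimal sketch of what such a policy looks like, the snippet below writes a policy document granting read-only access to a single S3 bucket. The bucket name `example-bucket` and the policy name are placeholders, and the JSON is sanity-checked locally before you would upload it:

```shell
# Write an example IAM policy: read-only access to one (placeholder) S3 bucket.
cat > readonly-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
EOF

# Sanity-check the JSON before uploading it:
python3 -m json.tool readonly-s3-policy.json > /dev/null && echo "policy is valid JSON"

# Then attach it with the AWS CLI (requires configured credentials):
#   aws iam create-policy --policy-name ReadOnlyS3 \
#       --policy-document file://readonly-s3-policy.json
```

Note that the bucket ARN and the `/*` object ARN are both listed: `s3:ListBucket` applies to the bucket itself, while `s3:GetObject` applies to the objects inside it, a distinction that trips up many first policies.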
As we continue our series of articles on security, today we are going to cover personal security in the form of PGP keys. Let's talk about what PGP is, why it is important, and how we can use it to generate the public and private keys needed to strengthen our personal security profile for transaction authentication.
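As a sketch of key generation, the commands below create a key pair non-interactively with GnuPG (assuming gpg 2.1 or later). In practice you would run the interactive `gpg --full-generate-key` and protect the private key with a strong passphrase; the name, email, and scratch keyring here are purely illustrative:

```shell
# Use a scratch keyring so this demo does not touch your real ~/.gnupg:
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Non-interactive key generation (illustrative identity, no passphrase):
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo User <demo@example.com>" ed25519 sign never

# Export the public key in ASCII armor, the form you share with others:
gpg --armor --export "demo@example.com" | head -n 1
# -----BEGIN PGP PUBLIC KEY BLOCK-----
```

The private key never leaves your keyring; only the armored public key block is published or handed to correspondents, who use it to verify your signatures or encrypt messages to you.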
If you use a cloud provider for your application and infrastructure needs, tell me quickly: Where is your data? Don't feel bad if you don't know; most people and organizations have no idea where their data actually resides.

Security Conversations

There are security implications the moment you move your data off-site to any outside hosting or cloud provider. How do you protect it from malicious access attempts? In many cases, your efforts at securing that data may be falling victim to the security conversations that are not happening.

Where Is It?

Physically, where does your data reside? That sounds like a loaded question because we all know that, in the cloud, it could be anywhere at any time. However, in the case of sensitive corporate or personal data, that could be a problem, right? At any given point in time, the data for your application and the customers that use it sits somewhere in a data center, on one or more physical disks. A quick look at AWS, for example, provides information about the Availability Zone and Region that an application and its data are in, but nothing specific as to location.
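For example, on AWS that placement information is available from inside a running EC2 instance via the instance metadata service. Note the limits of the answer: it tells you the Region and zone, never the building or the disk. These commands only work on the instance itself:

```shell
# Query the EC2 instance metadata service for placement information
# (only reachable from inside the instance):
curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
# e.g. us-east-1a: the Region plus a zone letter, and nothing more specific
```

An Availability Zone name like `us-east-1a` maps to one or more physical data centers, but AWS deliberately does not disclose which facility, rack, or disk holds your bytes; that is exactly the gap this article is asking you to weigh.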
Welcome to this episode of the Cloud Cadet podcast. Today, Anthony and Christophe sit down with Linux Academy's core OpenStack instructor, Stephen. Stephen has been an instructor with us for about two years. He teaches our Linux Essentials course (which is where we suggest everyone start), but today he's going to talk a little bit about AWS. We're going to set up a web application firewall, specifically Barracuda's WAF. Stephen shows you just how easy it is to use the AWS Marketplace to spin up the WAF and configure security groups to get it up and running.