We deploy the real environment; you take the scenario-based labs on us. Hands-on, from anywhere, at any time.
This course gives those new to the Elastic Stack an introductory overview of its core services (Elasticsearch, Logstash, Kibana, Beats), features, terminology, and basic administration. It follows a real-world use case: setting up a log aggregation pipeline for web access logs, then analyzing those logs in Kibana with search, visualizations, and dashboards.
If you are looking for "ELK Stack" material, this is the place! Now that the Elastic Stack is more than Elasticsearch, Logstash, and Kibana, the "ELK" naming convention has been retired!
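As a taste of the pipeline this course builds, a minimal Logstash configuration might look like the following. This is an illustrative sketch, not the course's exact setup; it assumes Filebeat ships web access logs to Logstash on port 5044 and that Elasticsearch listens on localhost:9200.

```
# Receive access logs from Beats, parse them, index into Elasticsearch.
input {
  beats { port => 5044 }
}
filter {
  grok {
    # Parse standard combined-format web access logs into fields.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```

Once documents are indexed, Kibana can search and visualize them directly.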
Big Data Essentials is a comprehensive introduction to the world of Big Data. Starting with the definition of Big Data, we describe the various characteristics of Big Data and its sources. Using real-world examples, we highlight the growing importance of Big Data. We discuss architectural requirements and principles of Big Data infrastructures and the intersection of cloud computing with Big Data. We also provide an overview of the most popular Big Data technologies, including core Hadoop, the Hadoop ecosystem (Hive, Pig, Sqoop, Flume, Kafka, Storm, Ambari, Oozie, Zookeeper), NoSQL databases, and Apache Spark. We conclude this lesson with a tour of the different types of analytics that can be performed on Big Data and the various techniques and tools used.
Hadoop has become a staple technology in the big data industry by enabling the storage and analysis of datasets so large that working with them would otherwise be impossible with traditional data systems. In this course, we are going to jump right into deploying Hadoop, configuring HDFS, and executing MapReduce jobs. Lastly, you will get to try it all out yourself with a guided hands-on learning activity. So let's get started!
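Before touching Hadoop itself, the MapReduce programming model is worth seeing in miniature. This local plain-Python sketch only illustrates the map, shuffle/sort, and reduce phases of a word count; it is not a real Hadoop job.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    for word in line.lower().split():
        yield (word, 1)

def reduce_phase(word, counts):
    # Reduce: sum all counts emitted for one word.
    return (word, sum(counts))

def mapreduce(lines):
    # Shuffle/sort: group intermediate pairs by key, as Hadoop does
    # between the map and reduce phases.
    pairs = sorted(kv for line in lines for kv in map_phase(line))
    return [reduce_phase(key, (count for _, count in group))
            for key, group in groupby(pairs, key=itemgetter(0))]

print(mapreduce(["the quick brown fox", "the lazy dog"]))
# → [('brown', 1), ('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]
```

On a real cluster, the map and reduce functions run in parallel across many nodes, with HDFS providing the distributed storage underneath.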
Download the Hadoop Quick Start Interactive Guide here: https://interactive.linuxacademy.com/diagrams/HadoopQuickStart.html
Linux Academy has many innovative tools and services, like your own cloud lab, that cannot be found anywhere else. This course covers the robust, student-success-focused offering we have developed and how to use it. Anyone looking to maximize their learning and the value they receive from a Linux Academy membership should go through this course.
Linux Academy provides more content, hands-on labs, your own cloud lab, and more value per dollar than any other training provider.
This course is a replacement for the older "Introduction To Linux Academy" course.
Follow right on the heels of the Elastic Stack Essentials course with the Elasticsearch Deep Dive. Go hands-on with the core functionality of Elasticsearch (installing, indexing, querying). Next, learn how to configure it for production use with TLS encryption, user access control, monitoring, and alerting via X-Pack, as well as automated index management with Elasticsearch Curator. Finally, get to know best practices around heap and cluster sizing, hardware requirements, and performing live upgrades.
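The core indexing and querying operations covered early in the course boil down to JSON documents over a REST API. A minimal sketch in Kibana Dev Tools console syntax (the `web-logs` index name and document fields here are illustrative, not from the course):

```
# Index a document (the index is created on first write)
PUT /web-logs/_doc/1
{ "message": "GET /index.html HTTP/1.1 200", "status": 200 }

# Run a full-text match query against the indexed documents
GET /web-logs/_search
{ "query": { "match": { "message": "index.html" } } }
```

The same requests can be issued with curl or any HTTP client against port 9200.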
In this course, you will develop the skills that you need to write effective and powerful scripts and tools using Python. We will go through the necessary features of the Python language to be able to leverage its additional benefits in writing scripts and creating command line tools (data types, loops, conditionals, functions, error handling, and more). Beyond the language itself, you will go through the full development process including project set up, planning, and automated testing to build two different command line tools.
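For a flavor of what such a command line tool can look like, here is a minimal hypothetical sketch (not one of the two tools built in the course) using argparse from the standard library:

```python
import argparse

def count_lines(text, pattern=None):
    """Count lines in text, optionally only those containing pattern."""
    lines = text.splitlines()
    if pattern:
        lines = [line for line in lines if pattern in line]
    return len(lines)

def main():
    # Define the tool's command line interface.
    parser = argparse.ArgumentParser(description="Count lines in a file.")
    parser.add_argument("path", help="file to read")
    parser.add_argument("-p", "--pattern",
                        help="only count lines containing this text")
    args = parser.parse_args()
    with open(args.path) as f:
        print(count_lines(f.read(), args.pattern))

if __name__ == "__main__":
    main()
```

Keeping the logic (`count_lines`) separate from the interface (`main`) is exactly what makes automated testing of such tools straightforward.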
This course begins by explaining the need for Machine Learning and how it originated from Artificial Intelligence and gave rise to deep learning. We explain important concepts in ML, including categories of algorithms, statistical and computer science terms used in model creation, feature engineering, overfitting, generalization, underfitting, and cross validation. We also dive into the topic of data science and discuss why ML is an important part of data science.
The course then provides hands-on training on Azure Machine Learning, giving a tour of ML Studio, its various features, and the concept of an experiment. We demonstrate the process of creating ML experiments and create predictive models to predict automobile prices and generate recommendations for movies.
The exercises in this course allow the student to get familiar with Azure Machine Learning and gain confidence in exploring the tool further.
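To make one of the concepts above concrete, cross validation can be sketched in a few lines of plain Python. This is an illustrative sketch of k-fold splitting, not something taken from ML Studio:

```python
def k_fold_splits(data, k=5):
    # Partition the data into k folds; each fold serves once as the
    # held-out validation set while the remaining folds form the
    # training set. Averaging a model's score across all k rounds
    # gives a more reliable estimate of generalization than a single
    # train/test split.
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i
                    for x in fold]
        yield training, validation

for training, validation in k_fold_splits(list(range(10)), k=5):
    print(len(training), len(validation))  # 8 2, five times
```

Azure ML Studio exposes the same idea through its built-in cross-validation support, so no code like this is needed in the tool itself.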
As one of the early log aggregation products in the IT industry, Splunk has remained a popular choice amongst system administrators, engineers, and developers for operational analytics. Whether you are aggregating log files, system resource utilization metrics, or application data, Splunk is there to centralize your IT data for easy search and visualization.
This course serves as an introduction to Splunk Enterprise. After getting familiar with some basic terminology and components, you will get to follow along by setting up your own standalone Splunk instance through the Linux Academy Cloud Playground. With your own instance, you can follow along as we secure our standalone Splunk instance, configure monitoring and alerting, and finally index some log data to perform search and visualization analysis.
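As an illustration of the kind of search analysis the course ends with, a basic search in Splunk's Search Processing Language (SPL) might look like this (the `web` index name, `access_combined` sourcetype, and field names are illustrative assumptions):

```
index=web sourcetype=access_combined status>=500
| stats count by host, status
| sort -count
```

This finds server-error events in the indexed access logs, counts them per host and status code, and sorts the busiest offenders to the top, the sort of result that feeds directly into a Splunk visualization.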
Big data technologies are some of the most exciting and in-demand skills. These tools power large companies such as Google and Facebook, so it is no wonder AWS is spending more time and resources developing certifications and new services to catalyze the move to AWS big data solutions.
This course will provide you with much of the knowledge needed to be prepared to take the AWS Big Data Specialty Certification. We will cover the different AWS (and non-AWS!) products and services that appear on the exam. Importantly, we will not cover material you should already have a solid understanding of, such as AWS Identity and Access Management and global infrastructure. For those foundational concepts, definitely review the AWS Certified Developer - Associate Level course here on Linux Academy.
Access The Data Dispatch: https://interactive.linuxacademy.com/diagrams/thedatadispatch.html
Join the Linux Academy community slack for chat here: https://linuxacademy-community-slack.herokuapp.com/ and join the #containers channel.
The Google Cloud Professional Data Engineer is able to harness the power of Google's big data capabilities and make data-driven decisions by collecting, transforming, and visualizing data. Through designing, building, maintaining, and troubleshooting data processing systems with a particular emphasis on the security, reliability, fault-tolerance, scalability, fidelity, and efficiency of such systems, a Google Cloud data engineer is able to put these systems to work.
This course will prepare you for the Google Cloud Professional Data Engineer exam by diving into all of Google Cloud's data services. With interactive demonstrations and an emphasis on hands-on work, you will learn how to master each of Google's big data and machine learning services and become a certified data engineer on Google Cloud.
Download the Data Dossier: https://interactive.linuxacademy.com/diagrams/TheDataDossier.html
Elasticsearch has become a favorite technology of administrators, engineers, and developers alike. Whether you are using it with the rest of the Elastic Stack, or on its own, Elasticsearch is a powerful and user-friendly search and analytics engine. Log aggregation, operational analytics, application performance monitoring, NoSQL databases, site search, and ad-hoc data analysis are just a few of the many things Elasticsearch is used for. The Elastic Certified Engineer Certification was created to recognize IT professionals with expertise in Elasticsearch. An Elastic Certified Engineer can design and deploy a complete Elasticsearch solution.
DISCLAIMER: This is not an official Elastic course created by or approved by Elastic. Linux Academy is in no way affiliated with Elastic or Elasticsearch BV.