
Certified Associate in Python Programming Certification

Course

Intro Video

Photo of Keith Thompson

Keith Thompson

DevOps Training Architect II in Content

Length

10:54:33

Difficulty

Intermediate

Videos

30

Hands-on Labs

9

Quizzes/Exams

1

Course Details

The Certified Associate in Python Programming Certification (PCAP) is a great place to start when getting Python certified. The Python Institute provides multiple certification exams for Python, ranging from entry-level to professional-level. This course is designed to teach the fundamentals of Python required to take and pass the Certified Associate in Python Programming Certification exam before moving on to more advanced certifications.

Note: A prerequisite to this course is understanding all the content covered in the Certified Entry-Level Python Programmer Certification course.

Throughout this course we cover:

Lambdas
Higher-order functions
Modules
Packages
Classes and objects
Exceptions
Assertions
File IO

Completing this course should enable you to feel more than comfortable taking and passing the Certified Associate in Python Programming Certification exam. More importantly, this course provides a good understanding of the fundamentals of Python programming.

Syllabus

Course Introduction

Getting Started

Course Introduction

00:00:55

Lesson Description:

Python is one of the most versatile and widely used programming languages that exist today. Whether you're working in server administration, web development, or data science, you've likely interacted with a tool written in Python or been asked to write some Python yourself. This course is designed to teach you how to use Python well enough to be able to pass the Certified Associate in Python Programming exam by the Python Institute. Because there is so much overlap with the Certified Entry-Level Python Programmer content, you will need to take that course first before taking this course.

About the Training Architect

00:00:48

Lesson Description:

In this video, you'll learn a little about me, Keith Thompson.

Environment Setup

Installing Python 3.7 on a Cloud Playground

00:09:20

Lesson Description:

Learn how to install Python 3 using pyenv on a CentOS 7 server that has code-server pre-installed, providing a full development environment. Note: This course uses Python 3.7, and you will definitely run into issues if you are using Python < 3.7.

Picking the Right Cloud Playground Image

If you plan on following along with the course on your local workstation, you'll want to make sure you have a good development environment set up. But if you want to follow along exactly with the course, then you'll want to create a Cloud Playground server (use 2 or 3 units) using the "CentOS 7 w/ code-server" image. This image will give us a server with code-server pre-installed (VS Code running on the server and accessible through the browser).

Using code-server to Program on the Server

By using the public IP address (or domain name) of the server and its port 8080, we can access code-server from our browser, giving us a full development environment with a terminal available to us. We'll be redirected to the page being served over HTTPS and, depending on our browser, we'll need to click a few buttons to acknowledge that we know the certificate is self-signed.

Installing pyenv

Installing Python from source can be a great learning experience, but it is a little tedious. For this course, we're instead going to install pyenv, which will allow us to install and switch between multiple Python versions more easily. To get started, we need to make sure we have some development dependencies installed so we can pull down the pyenv repository. We're using the --skip-broken flag because the "CentOS 7 w/ code-server" playground image already has Git installed. If you're using a different image, you can install Git using the package manager for that system.

sudo yum install -y --skip-broken git gcc zlib-devel bzip2-devel readline-devel sqlite-devel
Now we need to clone the pyenv repository.
$ git clone https://github.com/pyenv/pyenv.git ~/.pyenv
For pyenv to be useful, we'll need to set a few environment variables and run a command when our shell is loading. We'll add those to our ~/.bashrc file so this action happens as soon as our shell is initialized.
$ echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
$ echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
$ echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n  eval "$(pyenv init -)"\nfi' >> ~/.bashrc
Before we can use pyenv, we'll need to reload our shell:
$ exec $SHELL
Finally, let's install Python 3.7.6
$ pyenv install 3.7.6
Now we can check and switch between versions of Python using pyenv. To see the versions available to us, we'll use the pyenv versions command:
$ pyenv versions
* system (set by /home/cloud_user/.pyenv/version)
  3.7.6
To change our active version, we'll use pyenv shell <VERSION>:
$ pyenv shell 3.7.6
$ python --version
Python 3.7.6
We also have python3 and python3.7 executables that we can use. To make it apparent which version is being used throughout the course, you'll see that the commands use the python3.7 executable.

Upgrade pip

The version of pip that we have might already be up to date, but it's a good practice to try to update it after the installation. Let's update it now:
$ pip3.7 install --upgrade pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/57/36/67f809c135c17ec9b8276466cc57f35b98c240f55c780689ea29fa32f512/pip-20.0.1-py2.py3-none-any.whl (1.5MB)
     |████████████████████████████████| 1.5MB 3.1MB/s
Installing collected packages: pip
  Found existing installation: pip 19.2.3
    Uninstalling pip-19.2.3:
      Successfully uninstalled pip-19.2.3
Successfully installed pip-20.0.1
If you chose to install the Python extension in code-server, be sure to press the blue button to reload. When prompted at the bottom right-hand corner, click Install to install Pylint, a source-code bug and quality checker for the Python programming language.
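As a quick sanity check before moving on (these are just the commands we already used above, rerun for verification), the environment should report the versions we installed:

$ pyenv versions
$ python3.7 --version
$ pip3.7 --version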

Functions and Modules

Lambdas and Collection Functions

Defining and Using Lambdas

00:05:01

Lesson Description:

Sometimes we want to group some lines of code so that they can be reused, but creating a named function feels unnecessary. In these situations, such as when we're acting on the items of a collection, we can use lambdas or "lambda expressions" to create anonymous functions. In this lesson, we'll learn how to create and use lambdas.

Documentation For This Video: Lambda Expressions

What Is an Anonymous Function?

An anonymous function is a function that doesn't have a name. If we try to classify what makes a function, we can break it into a few requirements:

A name
A list of parameters
A function body
An optional return value

The name portion is only a requirement so that we can reference the function later. But if we want a "function" that isn't useful anywhere else besides the current context, we can define a function without a name by using the lambda keyword to create a "lambda expression". We can still assign it to a variable if we want. There is one catch: the lambda's body can only be a single expression.

Creating a Lambda

To learn more about lambdas, let's create a folder called lambdas-and-collections. Within it, let's create a new file called learning_lambdas.py.

$ mkdir ~/lambdas-and-collections
$ cd ~/lambdas-and-collections
The easiest way to wrap our minds around lambdas is to convert an existing function into one. Let's convert the following square function to be a lambda: ~/lambdas-and-collections/learning_lambdas.py
def square(num):
    return num * num
Lambdas are most commonly used for single-line functions like this, and the value of the lambda's expression is always returned. With that in mind, this function could be written as a lambda like this: ~/lambdas-and-collections/learning_lambdas.py
def square(num):
    return num * num

square_lambda = lambda num : num * num
To call our lambda, we use parentheses the same way we do when calling a function. Let's ensure that this function and lambda are equivalent by adding an assert statement to the end of the file before running it: ~/lambdas-and-collections/learning_lambdas.py
def square(num):
    return num * num

square_lambda = lambda num : num * num

assert square(3) == square_lambda(3)
If we run this file, we should see that there are no errors:
$ cd ~/lambdas-and-collections
$ python3.7 learning_lambdas.py
$
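As a side note that goes slightly beyond the lesson's example, lambdas can accept multiple parameters just like named functions; the add name below is only for illustration:

add = lambda x, y: x + y
assert add(2, 3) == 5

This two-parameter form is exactly what we'll rely on later when passing a lambda to reduce.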
Now that we know how to define and call lambda expressions, we're ready to learn about when we're most likely to put them to use: when using collection functions.

Using Collection Functions

00:20:11

Lesson Description:

When thinking about times where we might want to use a lambda, we'll also want to think about when we might want to repeat a single expression. The most common example is when we want to process a collection using a function that can also take a function as an argument. Some examples of functions like these are:

map
filter
reduce
reversed
sorted

There's also the sort method on the list type.

Documentation For This Video: Lambda Expressions, the map function, the filter function, the reduce function, the reversed function, the sorted function, the list.sort method

What Are "Higher-Order Functions"?

When a function or method takes a function as an argument (or returns a function), it is called a "higher-order function". This term isn't explicitly mentioned anywhere in the PCAP syllabus, but it is worth knowing. Most of the higher-order functions that we'll be working with take a collection and a function as parameters so that each item in the collection can be processed by the function argument. Let's start digging into some of these functions.

The map Function

In mathematics, a list of potential arguments for a function is called a "domain", and for each domain there's a corresponding "range" of the same length that holds the return value of the function for each item in the domain. The map function takes a function as the first argument and a collection that acts as the domain. It returns the range. Now that we have the math talk out of the way, let's see this in practice by creating a file called collection_funcs.py: ~/lambdas-and-collections/collection_funcs.py

domain = [1, 2, 3, 4, 5]
our_range = map(lambda num: num * 2, domain)
print(list(our_range))
Note that the result of map is an iterable, but it is not a list, so it wouldn't print the way we'd like; we need to convert it to a list first. The function we're mapping doubles the provided number. Let's run this file to see what is printed.
$ cd ~/lambdas-and-collections
$ python3.7 collection_funcs.py
[2, 4, 6, 8, 10]
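As an aside (not shown in the lesson), if we printed our_range without converting it to a list first, we would only see the iterator object itself, something along the lines of:

print(our_range)  # <map object at 0x...> (the address will vary)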
The filter Function

Like map, the filter function takes a function and a collection. However, instead of returning the result of the function for each item, it returns an iterator containing only the items for which the function returns a truthy result. This allows us to filter the collection based on a specific condition. Let's give it a shot by filtering our list down to only the even values: ~/lambdas-and-collections/collection_funcs.py
domain = [1, 2, 3, 4, 5]
our_range = map(lambda num: num * 2, domain)
print(list(our_range))

evens = filter(lambda num: num % 2 == 0, domain)
print(list(evens))
Now we can run this and we should see the two even values returned:
$ python3.7 collection_funcs.py
[2, 4, 6, 8, 10]
[2, 4]
The reduce Function

The reduce function is not quite as straightforward as map and filter. When we reduce a collection, we utilize the values within the collection to eventually produce a single final result. An example of a function that reduces a list is the sum function, which returns the result of adding all the items in a list together. To make this possible, we need an extra argument: a starting value. We also need the function that we pass to reduce to take two arguments: the accumulated value and the current item from the collection. To help solidify these ideas, let's reimplement sum using reduce. The reduce function used to be a built-in function but was moved into the functools module in Python 3, so we'll need to import that module to use it. We haven't covered importing modules yet, but we will in the coming section. For the time being, just copy the from ... line of code and know that it gives us access to the reduce function: ~/lambdas-and-collections/collection_funcs.py
from functools import reduce

domain = [1, 2, 3, 4, 5]
our_range = map(lambda num: num * 2, domain)
print(list(our_range))

evens = filter(lambda num: num % 2 == 0, domain)
print(list(evens))

the_sum = reduce(lambda acc, num: acc + num, domain, 0)
print(the_sum)
Let's break down our lambda. Whatever is returned by the lambda's expression will be used as the acc value of the next iteration. For this to work, we need an initial value for acc, and that's what the third argument to reduce is. By setting our initial value to 0, we can perform the addition on the first iteration. From that point on, we're able to keep adding to the previous result.

Sorting Functions and Methods

In the PCEP course, we covered the sorted and reversed functions, but we never talked about the key parameter. While not entirely obvious based on the parameter name, the key parameter takes a function that each item in the collection will be processed with before the comparison is run to determine the order. The list.sort method also has a key parameter, but unlike the sorted function, the list will be changed in place. To demonstrate how the key parameter can be useful, let's take a look at how it can be used to alphabetize a list of words: ~/lambdas-and-collections/collection_funcs.py
from functools import reduce

domain = [1, 2, 3, 4, 5]
our_range = map(lambda num: num * 2, domain)
print(list(our_range))

evens = filter(lambda num: num % 2 == 0, domain)
print(list(evens))

the_sum = reduce(lambda acc, num: acc + num, domain, 0)
print(the_sum)

words = ['Boss', 'a', 'Alfred', 'fig', 'Daemon', 'dig']
print("Sorting by default")
print(sorted(words))

print("Sorting with a lambda key")
print(sorted(words, key=lambda s: s.lower()))
We can see here that the default sorted result is:
['Alfred', 'Boss', 'Daemon', 'a', 'dig', 'fig']
This isn't quite what we want. By passing in a key function that converts strings to lowercase before making the comparison, we're able to get a more accurate result:
['a', 'Alfred', 'Boss', 'Daemon', 'dig', 'fig']
Finally, let's sort the words list in place using the list.sort method: ~/lambdas-and-collections/collection_funcs.py
from functools import reduce

domain = [1, 2, 3, 4, 5]
our_range = map(lambda num: num * 2, domain)
print(list(our_range))

evens = filter(lambda num: num % 2 == 0, domain)
print(list(evens))

the_sum = reduce(lambda acc, num: acc + num, domain, 0)
print(the_sum)

words = ['Boss', 'a', 'Alfred', 'fig', 'Daemon', 'dig']
print("Sorting by default")
print(sorted(words))

print("Sorting with a lambda key")
print(sorted(words, key=lambda s: s.lower()))

print("Sorting with a method")
words.sort(key=str.lower, reverse=True)
print(words)
This last example shows us passing the str.lower method as the key, instead of creating a lambda that does this. While it may look a little confusing, the following lines are equivalent:
'my_STR'.lower()
str.lower('my_STR')
Now we have a better idea of how we can pass functions and lambdas to other functions.
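To reinforce the "higher-order function" idea with something that isn't in the lesson's files, here is a minimal sketch of writing our own higher-order function; the apply_twice name is purely illustrative:

def apply_twice(func, value):
    # Call the provided function on the value, then call it again on the result.
    return func(func(value))

assert apply_twice(lambda num: num * 2, 3) == 12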

Hands-on Labs are real, live environments that put you in a real scenario to practice what you have learned, without any extra charge or account to manage.

00:30:00

The `if` Operator

The `if` Operator

00:03:47

Lesson Description:

The PCAP syllabus includes the line "the if operator". This is a term only found in this syllabus; the term that should have been used is "conditional expressions". In this lesson, we'll learn what conditional expressions are and how we can use them.

Documentation For This Video: Conditional Expressions

What Is a Conditional Expression?

A "conditional expression" is sometimes referred to as a "ternary expression" in other languages. Sometimes we want a single line to do one thing or another based on a condition; for example, we want to set a variable to one value if a condition is true or a different value if the condition is false. Here's what this would look like using a conditional statement:

if CONDITION:
    my_var = 1
else:
    my_var = 2
Using a conditional expression, we can do this in a single line:
my_var = 1 if CONDITION else 2
This syntax isn't restricted to variable assignment, but that is a common usage. If we wanted to print a different message based on a condition, we could also do that using a conditional expression:
print("something") if 1 > 2 else print("something else")
Or if we want to simplify this further, we could let the conditional expression return the value directly to the print function:
print("something" if 2 > 1 else "something else")

Hands-on Labs are real, live environments that put you in a real scenario to practice what you have learned, without any extra charge or account to manage.

00:30:00

Modules and Packages

Creating and Using Python Modules

00:07:18

Lesson Description:

To have truly reusable code, we need to access functions, variables, and objects that have already been written, so we need a way to share our code. This is where modules and packages are useful. In this lesson, we demonstrate how to create our first Python module and access its contents from a different Python program.

Documentation for This Video: Python Modules Documentation, The import Statement

What Is a Module?

Working with Python, it's very easy to define new functions and assign values to variables that we would like to use multiple times. It would be great if we could write these useful pieces of code once and then use them whenever we need them. Thankfully, we can do just that because of modules. In Python, a module is just a Python file. This means that we can use modules to divide our code into logical groupings by putting code into separate modules and then pulling those modules into our scripts or applications when we need them.

Creating Our First Module

To demonstrate how to create and use modules, let's create a new directory called using_modules. Within it, we'll define our first module by creating the using_modules/helpers.py file.

$ mkdir ~/using_modules
$ cd ~/using_modules
$ touch helpers.py
Within helpers.py, we're placing some functions that we think will be generally useful and likely to be used in other files. Let's write a few functions that can manipulate strings. ~/using_modules/helpers.py
def extract_upper(phrase):
    return list(filter(str.isupper, phrase))

def extract_lower(phrase):
    return list(filter(str.islower, phrase))
Now we have two functions defined, and we'd like to use them in other scripts and modules.

Using Our Module from Another Script

For this section of the course, we're going to be putting our example code into a script called main.py. Let's create that script now and look at what we can do to pull in these functions so that we can use them. The key to working with modules is the import statement. We'll dig deeper into all that we can do while importing modules in the next lesson. For now, we're going to leverage the fact that we can import modules in the same directory as our script by referencing them by their file name minus the extension. In our case, this will be helpers. ~/using_modules/main.py
import helpers
Before we use our functions, let's make sure that this file is valid by running it.
$ python3.7 main.py
$
No output is a good sign. To utilize the functions defined in our module, we type the module name (i.e. the file name minus the extension), add a period, and then call the function as we otherwise would. ~/using_modules/main.py
import helpers

name = "Keith Thompson"
print(f"Lowercase letters: {helpers.extract_lower(name)}")
print(f"Uppercase letters: {helpers.extract_upper(name)}")
Let's run this and verify it works as expected.
$ python3.7 main.py
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
Perfect! Now we know the simplest way to define and use modules. In the next lesson, we'll dig deeper into the various ways and places that we can import modules.

Importing Modules

00:05:29

Lesson Description:

Python provides a few different ways to import modules and packages. In this lesson, we'll take a look at how importing works and the various ways we can import definitions from a module.

Documentation for This Video: Python Modules Documentation, The import Statement

The Standard import Statement

When we learned how to create a module, we also learned how to import the module as a single entity into other Python files. To reiterate, we use the following format to import an entire module under its own namespace.

import my_module_name
By doing this, we're able to access anything exposed by the module by chaining off of the module's name. Occasionally, we might have a naming conflict when importing a module. In those cases, we can also use the keyword as in the import statement to change the identifier that we use to represent the module. Let's change our using_modules/main.py so that the helpers module is accessed using the name h. ~/using_modules/main.py
import helpers as h

name = "Keith Thompson"
print(f"Lowercase letters: {h.extract_lower(name)}")
print(f"Uppercase letters: {h.extract_upper(name)}")
The name h isn't great, but it does demonstrate that we can rename modules when we import them. If we run this script, we'll see there's no difference in the output.
$ python3.7 main.py
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
Importing From

More often than not, we don't need everything provided by a module. In these cases, we can leverage the from form of the import statement to import only the definitions we need from the module, and then access them directly. To demonstrate how to do this for multiple functions, let's directly import the functions from our helpers module. The from statement works like this:
from <MODULE_NAME> import <definition>, <definition>, <etc.>
Here's what it looks like in main.py: ~/using_modules/main.py
from helpers import extract_lower, extract_upper

name = "Keith Thompson"
print(f"Lowercase letters: {extract_lower(name)}")
print(f"Uppercase letters: {extract_upper(name)}")
It's worth noting that we now don't have access to the helpers name in our code at all. If we change our extract_upper line to be chained off of the helpers name, it will cause an error.
$ python3.7 main.py
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Traceback (most recent call last):
  File "using_modules/main.py", line 5, in <module>
    print(f"Uppercase letters: {helpers.extract_upper(name)}")
NameError: name 'helpers' is not defined
Lastly, we can also combine the as keyword with each definition that we're importing to explicitly rename that definition. ~/using_modules/main.py
from helpers import extract_lower as e_low, extract_upper

name = "Keith Thompson"
print(f"Lowercase letters: {e_low(name)}")
print(f"Uppercase letters: {extract_upper(name)}")
Importing Everything from a Module

The final way we can import definitions from a module is to import all of them at once by using *. This is generally not the recommended way of importing things, but sometimes a module provides a lot of functions that we'll be using, and we don't want to explicitly import them one at a time. Let's utilize the * to import our two functions from the helpers module without explicitly naming them. ~/using_modules/main.py
from helpers import *

name = "Keith Thompson"
print(f"Lowercase letters: {extract_lower(name)}")
print(f"Uppercase letters: {extract_upper(name)}")
Once again, if we run this, it will work just as it did before.
$ python3.7 main.py
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
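These same import forms work for standard library modules too. As a quick illustration that isn't part of the lesson's files, the math module could be imported in any of the styles shown above:

import math
from math import sqrt
from math import sqrt as square_root

assert math.sqrt(16) == sqrt(16) == square_root(16) == 4.0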

Executing Modules as Scripts

00:09:03

Lesson Description:

Python modules are just files, but sometimes we want them to behave slightly differently when they're run directly. In this lesson, we'll learn how modules are interpreted when imported, and how to only run code when a module is run directly by using the __name__ variable.

Documentation for This Video: Python Modules Documentation, The import Statement, The __name__ Variable

Expressions in a Module

Since modules are just Python files, they can contain expressions, and the file will be interpreted from top to bottom. So a few good questions to ask ourselves are:

When is a module interpreted?
Can a module be interpreted twice?

To test this, let's create another module that imports our helpers module, and also import that new module into our main.py. We'll call this module extras.py. ~/using_modules/extras.py

print("Importing 'helpers' in 'extras'")
import helpers

name = "Keith Thompson"
In main.py, let's import extras. ~/using_modules/main.py
print("We're importing 'helpers' from 'main'")
from helpers import *

print("We're importing 'extras' from 'main'")
import extras

print(f"Lowercase letters: {extract_lower(extras.name)}")
print(f"Uppercase letters: {extract_upper(extras.name)}")
Finally, in helpers.py we'll add a print call so that we can see when it is run and how many times. ~/using_modules/helpers.py
def extract_upper(phrase):
    return list(filter(str.isupper, phrase))

def extract_lower(phrase):
    return list(filter(str.islower, phrase))

print("HELLO FROM HELPERS")
We now have enough print lines to help us really see how main.py is processed and when our modules are interpreted. When we run it, this is what we see:
$ python3.7 main.py
We're importing 'helpers' from 'main'
HELLO FROM HELPERS
We're importing 'extras' from 'main'
Importing 'helpers' in 'extras'
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
As we can see, the code within the helpers module was only interpreted the first time it was imported. So even though it was imported into two different modules, it was only ever run one time.

Running a Module Directly

Ideally, we don't want to run this print line when our module is imported, but sometimes we do want a module to execute something if it is run directly. To handle this, we can access the __name__ variable. The __name__ variable is set in each module and can be used to determine whether the module is being run directly as opposed to being imported. Let's change the various print lines from earlier to help us understand the value set in __name__ in each of our scripts. ~/using_modules/main.py
from helpers import *
import extras

print(f"__name__ in main.py: {__name__}")

print(f"Lowercase letters: {extract_lower(extras.name)}")
print(f"Uppercase letters: {extract_upper(extras.name)}")
~/using_modules/helpers.py
def extract_upper(phrase):
    return list(filter(str.isupper, phrase))

def extract_lower(phrase):
    return list(filter(str.islower, phrase))

print(f"__name__ in helpers.py: {__name__}")
print("HELLO FROM HELPERS")
~/using_modules/extras.py
import helpers

print(f"__name__ in extras.py: {__name__}")

name = "Keith Thompson"
Here's what we see when we run main.py:
$ python3.7 main.py
__name__ in helpers.py: helpers
HELLO FROM HELPERS
__name__ in extras.py: extras
__name__ in main.py: __main__
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
All of the modules that we imported have __name__ set to the actual module name, but main.py is set to __main__ because it is running in the main context. A common pattern is to add a condition like this if we want to add functionality to a module only if it is running in the main context:
if __name__ == "__main__":
    print("Something only when running in main scope")
To demonstrate this, let's remove all of these debugging lines, but move "HELLO FROM HELPERS" into this conditional in helpers.py. (We're only showing the change to helpers.py, but we removed all of the '__name__ in ...' output) ~/using_modules/helpers.py
def extract_upper(phrase):
    return list(filter(str.isupper, phrase))

def extract_lower(phrase):
    return list(filter(str.islower, phrase))

if __name__ == "__main__":
    print("HELLO FROM HELPERS")
If we now run main.py we should see the following:
$ python3.7 main.py
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
If we run helpers.py directly, we should see the print line being run.
$ python3.7 helpers.py
HELLO FROM HELPERS
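A common extension of this pattern, though not one used in the lesson's files, is to wrap a module's script behavior in a main function and only call it when the module is run directly:

def main():
    print("HELLO FROM HELPERS")

if __name__ == "__main__":
    main()

This keeps the importable definitions and the script behavior cleanly separated.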

Hiding Module Entities

00:04:46

Lesson Description:

Now that we know how to import our modules, we might want to restrict what is exposed. In this lesson, we'll look at how we can hide some of our module's contents from being imported by other modules and scripts.

Documentation for This Video: Python Modules Documentation

What Are Module Entities?

When we say "module entities", we mean variables, functions, and classes (we'll cover classes in the next section). A module entity is anything we give a name to in our module. As we've seen, these things are importable by name when we use from <module> import <name>.

Using __all__

If we want to prevent someone from importing an entity from our module, there aren't very many options. There are only two reasonable things we can do to restrict what is imported when someone uses from <module> import *. The first is setting the __all__ variable in our module. Let's test this out by setting __all__ to a list including only extract_upper to see what happens in main.py. ~/using_modules/helpers.py

__all__ = ["extract_upper"]

def extract_upper(phrase):
    return list(filter(str.isupper, phrase))

def extract_lower(phrase):
    return list(filter(str.islower, phrase))

if __name__ == "__main__":
    print("HELLO FROM HELPERS")
In main.py, we had been using both of these functions after loading them with from helpers import *. Here's another look at what main.py currently looks like. ~/using_modules/main.py
from helpers import *
import extras

print(f"Lowercase letters: {extract_lower(extras.name)}")
print(f"Uppercase letters: {extract_upper(extras.name)}")
With __all__ set in helpers, let's run main.py to see what happens.
$ python3.7 main.py
Traceback (most recent call last):
  File "main.py", line 4, in <module>
    print(f"Lowercase letters: {extract_lower(extras.name)}")
NameError: name 'extract_lower' is not defined
Although extract_lower exists within helpers.py, it is not available to other modules via from helpers import *. That doesn't mean we can't import it explicitly, though. Let's modify main.py to import extract_lower by name. ~/using_modules/main.py
from helpers import *
from helpers import extract_lower
import extras

print(f"Lowercase letters: {extract_lower(extras.name)}")
print(f"Uppercase letters: {extract_upper(extras.name)}")
Let's run this one more time.
$ python3.7 main.py
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
While it doesn't allow us to prevent an entity from ever being imported, using __all__ does provide a way of restricting what is imported by modules and scripts consuming our modules and packages with from <module> import *.

Using Underscored Entities

The other way we can prevent an entity from being exported automatically when someone uses from <module> import * is by making the first character of its name an underscore (_). If we removed __all__ from helpers.py and created a variable called _hidden_var = "test", we would not have access to _hidden_var after running from helpers import *, as sketched below.
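Here is a minimal sketch of that behavior, assuming __all__ has been removed from helpers.py and a _hidden_var line added alongside the existing functions:

# In helpers.py (underscore-prefixed, so it is skipped by 'import *'):
_hidden_var = "test"

# In another script:
from helpers import *
print(_hidden_var)  # NameError: name '_hidden_var' is not defined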

The Module Search Path

00:05:24

Lesson Description:

We've seen how to create our modules, and we've been able to import them from scripts adjacent to them in the file system, but where else can we import modules from?

Documentation For This Video: Python Modules Documentation, Python Standard Library

Where Do Modules Come From?

Python is a language with a large and powerful standard library of modules. To use these modules, we need to import them the same way that we've been importing our local modules, but how does Python know where to find the code for these modules? To understand this, we need to look at the module search path. When Python goes looking for a module, it has a search path that works very much like the PATH variable used by our shell to find executables. A few different things are combined to make this path:

The directory containing the running script is automatically the first item in the search path. When running the REPL, this will be the current directory.
The values set in the PYTHONPATH environment variable (if it is set) come next in the list.
Finally, there is a list of directories configured when Python was installed. This list contains paths to the directories that hold the standard library modules and other packages we've installed.

If we want to see the module search path, we can import the sys module and view its path variable. Let's do this from a REPL.

$ python3.7
Python 3.7.6 (default, Jan 29 2020, 21:20:26)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python37.zip', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python3.7', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python3.7/lib-dynload', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python3.7/site-packages']
>>> exit()
Our Python install is in ~/.pyenv/versions/3.7.6, and the directories within contain the standard library. The site-packages directory contains third-party packages that we might install. Just to show that we can change this, let's set the PYTHONPATH environment variable when starting the REPL.
$ PYTHONPATH=/home/cloud_user python3.7
Python 3.7.6 (default, Jan 29 2020, 21:20:26)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/home/cloud_user', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python37.zip', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python3.7', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python3.7/lib-dynload', '/home/cloud_user/.pyenv/versions/3.7.6/lib/python3.7/site-packages']
>>> exit()
Now we can see that /home/cloud_user is the second item in the list. If we don't have a matching module or package in our current directory (the '' in the list), then Python will check the directories passed in via PYTHONPATH before looking at the directories provided by our Python installation. Note: Python searches for built-in modules by name before searching the paths in sys.path, so a module you create with the same name as a built-in module won't accidentally overwrite (shadow) the built-in one.
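Beyond PYTHONPATH, sys.path is an ordinary list, so a script can also extend the search path at runtime before importing. This isn't covered in the lesson, and the directory and module names below are purely hypothetical:

import sys

sys.path.append("/home/cloud_user/extra_modules")  # hypothetical directory
import extra_module  # hypothetical module that lives in that directory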

Creating and Using Python Packages

00:07:25

Lesson Description:

Python modules are simply Python files, but they are not the only way we can bundle up our code for reuse. Modules are not that easy to share on their own; the primary way we share code is by wrapping our modules into packages. In this lesson, we'll learn what it takes to create a Python package.

Documentation for This Video: Python Packages Documentation, Implicit Namespace Packages

What Is a Package in Python?

A package is a namespace that allows us to group modules together. We create a package in Python by creating a directory to hold our modules and adding a special file named __init__.py. To show how a package can help us organize our code even more, let's create a helpers directory within using_modules and create an empty __init__.py file within that directory.

$ mkdir ~/using_modules/helpers
$ touch ~/using_modules/helpers/__init__.py
The __init__.py doesn't need to have anything in it, though we can and will use it later. Next, let's move our helpers.py file into the helpers directory and change its name to strings.py since this file holds helper functions completely focused on working with strings. Our extras.py module actually doesn't do anything besides defining variables, so let's move it into helpers as helpers/variables.py.
$ cd ~/using_modules
$ mv helpers.py helpers/strings.py
$ mv extras.py helpers/variables.py
We now have a package that contains two modules, but we also broke main.py. Let's change main.py to use our package, instead of the modules that we had before. ~/using_modules/main.py
from helpers.strings import extract_lower, extract_upper
from helpers import variables
import helpers

print(f"Lowercase letters: {extract_lower(variables.name)}")
print(f"Uppercase letters: {extract_upper(variables.name)}")
print(f"From helpers: {helpers.strings.extract_lower(variables.name)}")
The things to note here are that we can import a module within our package directly (as we did with variables), and we can chain off of the package name to import entities directly from a child module (as we did with helpers.strings). Just like with a module, we're also able to import the package itself. Running main.py again, we should see:
$ python3.7 main.py
Lowercase letters: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters: ['K', 'T']
From helpers: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
What Does __init__.py Do?

The mysterious __init__.py file is used to hold the initialization code for a package, but what does this mean? It means that when the first subpackage or module within the parent package is accessed, the code within __init__.py gets executed. The other primary thing we can do with our __init__.py is define the __all__ value used when someone runs from <package> import *. This doesn't immediately make sense because our __init__.py doesn't define anything right now, but we can import parts of our submodules and then make those immediately available if someone imports our package. Let's modify helpers/__init__.py to do just that. ~/using_modules/helpers/__init__.py
__all__ = ['extract_upper']

from .strings import *
The syntax of .strings allows us to specify that we want to load the strings module within our package, regardless of what our package is named. This is just a way to be a little more explicit. Let's change our main.py to use this. ~/using_modules/main.py
from helpers.strings import extract_lower
from helpers import variables
from helpers import *
import helpers

print(f"Lowercase letters (from strings): {extract_lower(variables.name)}")
print(f"Uppercase letters (from package): {extract_upper(variables.name)}")
print(f"Off of helpers: {helpers.strings.extract_lower(variables.name)}")
Once again, let's run our script to see that this code works.
$ python3.7 main.py
Lowercase letters (from strings): ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters (from package): ['K', 'T']
Off of helpers: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Implicit Namespace Packages

While the PCAP syllabus doesn't actually mention implicit namespace packages, it is worth noting that they exist. As of Python 3.3, if we're creating a package that doesn't need to do anything with the __init__.py, then we can skip creating the __init__.py entirely and our package will work just fine.
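As a small sketch of what that looks like (not part of the lesson's files), a directory without an __init__.py can still be imported as a package on Python 3.3+, as long as it sits on the module search path:

$ mkdir ~/namespace_demo
$ touch ~/namespace_demo/tools.py

From a script or REPL started in the home directory, import namespace_demo.tools would then work even though ~/namespace_demo contains no __init__.py.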

Distributing and Installing Packages

00:20:32

Lesson Description:

Packages are so important when working in Python because the community has published so many useful packages that can keep us from needing to write that code ourselves. Additionally, we can share our own code with others by setting up our packages for distribution.

Documentation for This Video: Distributing Packages and Setuptools, The Python Package Index, pip, the requests PyPI page

Installing Packages

Before we look at how to make our own packages installable, let's cover installing a package from someone else. The primary place we'll be installing packages from is the "Python Package Index", or "PyPI" for short. To install packages, we'll use pip. Let's install one of the most popular Python packages, the requests package.

$ pip3.7 install requests
Collecting requests
  Downloading https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl (57kB)
     |████████████████████████████████| 61kB 2.4MB/s
Collecting certifi>=2017.4.17 (from requests)
  Downloading https://files.pythonhosted.org/packages/b9/63/df50cac98ea0d5b006c55a399c3bf1db9da7b5a24de7890bc9cfd5dd9e99/certifi-2019.11.28-py2.py3-none-any.whl (156kB)
     |████████████████████████████████| 163kB 8.0MB/s
Collecting idna<2.9,>=2.5 (from requests)
  Downloading https://files.pythonhosted.org/packages/14/2c/cd551d81dbe15200be1cf41cd03869a46fe7226e7450af7a6545bfc474c9/idna-2.8-py2.py3-none-any.whl (58kB)
     |████████████████████████████████| 61kB 10.8MB/s
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests)
  Downloading https://files.pythonhosted.org/packages/e8/74/6e4f91745020f967d09332bb2b8b9b10090957334692eb88ea4afe91b77f/urllib3-1.25.8-py2.py3-none-any.whl (125kB)
     |████████████████████████████████| 133kB 10.8MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests)
  Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
     |████████████████████████████████| 143kB 12.8MB/s
Installing collected packages: certifi, idna, urllib3, chardet, requests
Successfully installed certifi-2019.11.28 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.8
$
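As a quick sanity check (not part of the lesson), we can confirm the install worked by importing requests and printing its version:

$ python3.7 -c "import requests; print(requests.__version__)"
2.22.0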
The requests package has some dependencies on other packages, so pip goes ahead and downloads those dependencies too. For the purposes of the PCAP exam, we just need to know how to install packages, but it is definitely worth viewing the other commands provided by pip by running pip --help.

Making a Package Installable

To make a package installable, it needs to have a file in the root of the project called setup.py. The structure of installable packages can vary, but the presence of a setup.py is constant. Let's make our helpers package installable by adding a setup.py and configuring it using the setup function. The "Python Packaging Authority" is the working group that maintains the core projects used for Python packaging, and it provides an example project. We're going to take the setup.py from that project as a starting point and modify it for our purposes. To begin, we need to change our helpers directory to be the container for our installable package (which is different from a "Python package"). Let's move things around before creating our setup.py.
$ cd ~/using_modules
$ mkdir -p helpers/src/helpers
$ mv helpers/*.py helpers/src/helpers/
Now our directory structure for helpers looks like this:
$ tree helpers
helpers/
     |---> src
           |---> helpers
                |---> __init__.py
                |---> strings.py
                |---> variables.py

2 directories, 3 files
The outer helpers directory is there just to hold onto our code and isn't actually a Python package. The inner helpers will provide the package that can be imported after the distribution of this code is installed. For our code to be installable, we still need a setup.py, and this will go in the outer helpers directory. Feel free to download it directly using the curl command or copy and paste the contents below.
$ cd helpers/
$ curl -O https://raw.githubusercontent.com/pypa/sampleproject/master/setup.py
Here's what it will look like: ~/using_modules/helpers/setup.py
from setuptools import setup, find_packages
from os import path

here = path.abspath(path.dirname(__file__))

# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
    long_description = f.read()

setup(
    name='helpers', # Required
    version='1.0.0', # Required
    description='Our custom collection of helper functions and variables.', # Optional
    # long_description=long_description, # Optional
    # long_description_content_type='text/markdown', # Optional (the README is markdown so we want to set this)
    # url='https://github.com/pypa/sampleproject', # Optional
    author='Keith Thompson',  # Optional
    author_email='keith@linuxacademy.com',  # Optional

    # Classifiers help users find your project by categorizing it.
    #
    # For a list of valid classifiers, see https://pypi.org/classifiers/
    classifiers=[  # Optional
        # How mature is this project? Common values are
        #   3 - Alpha
        #   4 - Beta
        #   5 - Production/Stable
        'Development Status :: 3 - Alpha',

        # Indicate who your project is intended for
        'Intended Audience :: Developers',
        'Topic :: Software Development :: Build Tools',

        # Pick your license as you wish
        'License :: OSI Approved :: MIT License',

        # Specify the Python versions you support here. In particular, ensure
        # that you indicate whether you support Python 2, Python 3 or both.
        # These classifiers are *not* checked by 'pip install'. See instead
        # 'python_requires' below.
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
    ],
    keywords='helpers',  # Optional

    # When your source code is in a subdirectory under the project root, e.g.
    # `src/`, it is necessary to specify the `package_dir` argument.
    package_dir={'': 'src'},  # Optional

    # You can just specify package directories manually here if your project is
    # simple. Or you can use find_packages().
    #
    # Alternatively, if you just want to distribute a single Python file, use
    # the `py_modules` argument instead as follows, which will expect a file
    # called `my_module.py` to exist:
    #
    #   py_modules=["my_module"],
    #
    packages=find_packages(where='src'),  # Required
    # Specify which Python versions you support. In contrast to the
    # 'Programming Language' classifiers above, 'pip install' will check this
    # and refuse to install the project if the version does not match. If you
    # do not support Python 2, you can simplify this to '>=3.5' or similar, see
    # https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires
    python_requires='!=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4',

    # This field lists other packages that your project depends on to run.
    # Any package you put here will be installed by pip when your project is
    # installed, so they must be valid existing projects.
    #
    # For an analysis of "install_requires" vs pip's requirements files see:
    # https://packaging.python.org/en/latest/requirements.html
    # install_requires=['peppercorn'],  # Optional

    # List additional groups of dependencies here (e.g. development
    # dependencies). Users will be able to install these using the "extras"
    # syntax, for example:
    #
    #   $ pip install sampleproject[dev]
    #
    # Similar to `install_requires` above, these must be valid existing
    # projects.
    # extras_require={  # Optional
    #     'dev': ['check-manifest'],
    #     'test': ['coverage'],
    # },

    # If there are data files included in your packages that need to be
    # installed, specify them here.
    #
    # If using Python 2.6 or earlier, then these have to be included in
    # MANIFEST.in as well.
    # package_data={  # Optional
    #     'sample': ['package_data.dat'],
    # },

    # Although 'package_data' is the preferred approach, in some case you may
    # need to place data files outside of your packages. See:
    # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files
    #
    # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
    # data_files=[('my_data', ['data/data_file'])],  # Optional

    # To provide executable scripts, use entry points in preference to the
    # "scripts" keyword. Entry points provide cross-platform support and allow
    # `pip` to create the appropriate form of executable for the target
    # platform.
    #
    # For example, the following would provide a command called `sample` which
    # executes the function `main` from this package when invoked:
    # entry_points={  # Optional
    #     'console_scripts': [
    #         'sample=sample:main',
    #     ],
    # },

    # List additional URLs that are relevant to your project as a dict.
    #
    # This field corresponds to the "Project-URL" metadata fields:
    # https://packaging.python.org/specifications/core-metadata/#project-url-multiple-use
    #
    # Examples listed include a pattern for specifying where the package tracks
    # issues, where the source is hosted, where to say thanks to the package
    # maintainers, and where to support the project financially. The key is
    # what's used to render the link text on PyPI.
    # project_urls={  # Optional
    #     'Bug Reports': 'https://github.com/pypa/sampleproject/issues',
    #     'Funding': 'https://donate.pypi.org',
    #     'Say Thanks!': 'http://saythanks.io/to/example',
    #     'Source': 'https://github.com/pypa/sampleproject/',
    # },
)
We left a lot of the comments in there because they are good to read and understand, but they cover optional fields. Some of the important and potentially confusing lines to look at are the package_dir and packages arguments. We've put our code into the src directory, so we've set these two arguments and used the find_packages function from setuptools to automatically find the packages that we're providing when someone installs this project.

Building a Distribution

Making code installable in Python means that we need to create a distribution. There are two primary types of distributions: eggs and wheels. Wheels are the modern way to create a distribution; a wheel is a single file that can be installed by pip, which will install any dependencies and place or unpack the source code into the site-packages directory of our Python installation. For us to build a wheel distribution, we need to install the wheel package and run a command using Python and our setup.py file. Let's install wheel first.
$ pip3.7 install --upgrade wheel
...
Setuptools provides us with multiple different subcommands if we process our setup.py through the Python interpreter. Let's take a look at those commands.
$ python3.7 setup.py --help
Traceback (most recent call last):
  File "setup.py", line 7, in <module>
    with open(path.join(here, 'README.md'), encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/cloud_user/using_modules/helpers/README.md'
Our setup.py specifies that we'll provide documentation in a README.md file, but that file doesn't exist, so we can't read it. We'll cover file IO later in the course, but for now, we just need to make sure that that file exists.
$ touch README.md
Now, let's try again.
$ python3.7 setup.py --help
Common commands: (see '--help-commands' for more)

  setup.py build      will build the package underneath 'build/'
  setup.py install    will install the package

Global options:
  --verbose (-v)      run verbosely (default)
  --quiet (-q)        run quietly (turns verbosity off)
  --dry-run (-n)      don't actually do anything
  --help (-h)         show detailed help message
  --no-user-cfg       ignore pydistutils.cfg in your home directory
  --command-packages  list of packages that provide distutils commands

Information display options (just display information, ignore any commands)
  --help-commands     list all available commands
  --name              print package name
  --version (-V)      print package version
  --fullname          print <package name>-<version>
  --author            print the author's name
  --author-email      print the author's email address
  --maintainer        print the maintainer's name
  --maintainer-email  print the maintainer's email address
  --contact           print the maintainer's name if known, else the author's
  --contact-email     print the maintainer's email address if known, else the
                      author's
  --url               print the URL for this package
  --license           print the license of the package
  --licence           alias for --license
  --description       print the package description
  --long-description  print the long package description
  --platforms         print the list of platforms
  --classifiers       print the list of classifiers
  --keywords          print the list of keywords
  --provides          print the list of packages/modules provided
  --requires          print the list of packages/modules required
  --obsoletes         print the list of packages/modules made obsolete

usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help
This gives us a lot of output, but only the common commands are provided to us. Reading the first line of the output, we can see that the rest of the commands can be shown by using --help-commands instead of --help. Let's do that.
$ python3.7 setup.py --help-commands
Standard commands:
  build             build everything needed to install
  build_py          "build" pure Python modules (copy to build directory)
  build_ext         build C/C++ extensions (compile/link to build directory)
  build_clib        build C/C++ libraries used by Python extensions
  build_scripts     "build" scripts (copy and fixup #! line)
  clean             clean up temporary files from 'build' command
  install           install everything from build directory
  install_lib       install all Python modules (extensions and pure Python)
  install_headers   install C/C++ header files
  install_scripts   install scripts (Python or otherwise)
  install_data      install data files
  sdist             create a source distribution (tarball, zip file, etc.)
  register          register the distribution with the Python package index
  bdist             create a built (binary) distribution
  bdist_dumb        create a "dumb" built distribution
  bdist_rpm         create an RPM distribution
  bdist_wininst     create an executable installer for MS Windows
  check             perform some checks on the package
  upload            upload binary package to PyPI

Extra commands:
  bdist_wheel       create a wheel distribution
  alias             define a shortcut to invoke one or more commands
  bdist_egg         create an "egg" distribution
  develop           install package in 'development mode'
  dist_info         create a .dist-info directory
  easy_install      Find/get/install Python packages
  egg_info          create a distribution's .egg-info directory
  install_egg_info  Install an .egg-info directory for the package
  rotate            delete older distributions, keeping N newest files
  saveopts          save supplied options to setup.cfg or other config file
  setopt            set an option in setup.cfg or another config file
  test              run unit tests after in-place build
  upload_docs       Upload documentation to PyPI

usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help
There are plenty of commands in here to play with, but the one that we care about is the extra command bdist_wheel. This will build a wheel distribution that will work perfectly with pip. Let's run that now.
$ python3.7 setup.py bdist_wheel
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/helpers
copying src/helpers/__init__.py -> build/lib/helpers
copying src/helpers/strings.py -> build/lib/helpers
copying src/helpers/variables.py -> build/lib/helpers
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/helpers
copying build/lib/helpers/__init__.py -> build/bdist.linux-x86_64/wheel/helpers
copying build/lib/helpers/strings.py -> build/bdist.linux-x86_64/wheel/helpers
copying build/lib/helpers/variables.py -> build/bdist.linux-x86_64/wheel/helpers
running install_egg_info
running egg_info
writing src/helpers.egg-info/PKG-INFO
writing dependency_links to src/helpers.egg-info/dependency_links.txt
writing top-level names to src/helpers.egg-info/top_level.txt
reading manifest file 'src/helpers.egg-info/SOURCES.txt'
writing manifest file 'src/helpers.egg-info/SOURCES.txt'
Copying src/helpers.egg-info to build/bdist.linux-x86_64/wheel/helpers-1.0.0-py3.7.egg-info
running install_scripts
creating build/bdist.linux-x86_64/wheel/helpers-1.0.0.dist-info/WHEEL
creating 'dist/helpers-1.0.0-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it
adding 'helpers/__init__.py'
adding 'helpers/strings.py'
adding 'helpers/variables.py'
adding 'helpers-1.0.0.dist-info/METADATA'
adding 'helpers-1.0.0.dist-info/WHEEL'
adding 'helpers-1.0.0.dist-info/top_level.txt'
adding 'helpers-1.0.0.dist-info/RECORD'
removing build/bdist.linux-x86_64/wheel
We now have a build and dist directory inside of the upper helpers directory. The artifact that we created will be within the dist directory and end with a .whl extension. Going back to ~/using_modules, we'll actually run into issues if we try to run main.py right now because there is no helpers package local to the file anymore. Here's what we'll see when we run that script:
$ cd ~/using_modules
$ python3.7 main.py
Traceback (most recent call last):
  File "main.py", line 1, in <module>
    from helpers.strings import extract_lower
ModuleNotFoundError: No module named 'helpers.strings'
To get around this, we'll install our package using pip and the wheel we built.
$ pip3.7 install helpers/dist/helpers-1.0.0-py3-none-any.whl
Processing ./helpers/dist/helpers-1.0.0-py3-none-any.whl
Installing collected packages: helpers
Successfully installed helpers-1.0.0
When we run a script or load the REPL, we can load the helpers package and its internal modules.
$ python3.7 main.py
Lowercase letters (from strings): ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters (from package): ['K', 'T']
Off of helpers: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Our package is installed and our script runs again without using a module local to the script. We're not going to cover publishing a package to PyPI in this course, but the PyPA documentation details how to do that.
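For reference, a setup.py along these lines supports the src/ layout and the bdist_wheel command used above. This is only a minimal sketch of such a file, not necessarily the exact one used in the lesson:

from setuptools import setup, find_packages

setup(
    name="helpers",
    version="1.0.0",
    package_dir={"": "src"},          # our package code lives under src/
    packages=find_packages(where="src"),
)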

Docstrings, Doctests, and Shebangs

00:13:56

Lesson Description:

Now that we've created both modules and packages, we should help the potential users of our code by adding some documentation. Additionally, it's a little cumbersome to continually pass our main.py script to the Python executable to run it, so we're going to turn that script into an executable to make using it a little easier. Documentation for This VideoPython Packages Documentation Python doctest ModuleDocumenting Python Code Using Docstrings In many languages, when we write documentation for our code, it exists in the source code as a comment. Python is a little different because the documentation exists in the code. This official type of documentation is done by adding docstrings to our modules at the top of the file, or within functions, methods, and classes. Docstrings are triple quoted strings (start with """ or ''') used to write multi-line, structured documentation. To add documentation to a package, we can add a docstring to the top of the package's __init__.py file. Let's add some documentation to the helpers package. ~/using_modules/helpers/src/helpers/__init__.py

"""
Helpers is a package that provides easy to use helper functions
and variables.
"""

__all__ = ["extract_upper"]

from .strings import *
One of the most common misconceptions in Python is that we just created a "block comment". That's entirely incorrect. We created a multi-line string and the interpreter has to do some work to read that content. An actual comment starts with an octothorp/hash/pound sign and the interpreter completely ignores it. In the very specific case of a docstring, this string will actually be assigned to a hidden variable on the package, module, or function: the __doc__ variable. To demonstrate this, we're going to change how we installed our package so that it will pick up code changes as we write them. First, let's uninstall the existing helpers package.
$ pip3.7 uninstall -y helpers
Found existing installation: helpers 1.0.0
Uninstalling helpers-1.0.0:
  Successfully uninstalled helpers-1.0.0
We can install the package's source so that the changes we make will be available without a reinstall. This is handy in development, but not something we would have other users do.
$ cd ~/using_modules/helpers
$ pip3.7 install --editable .
Obtaining file:///home/cloud_user/using_modules/helpers
Installing collected packages: helpers
  Running setup.py develop for helpers
Successfully installed helpers
To see that our documentation is accessible in code, let's start the REPL, import our package, and access the __doc__ variable:
$ python3.7
Python 3.7.6 (default, Jan 30 2020, 15:46:02)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import helpers
>>> helpers.__doc__
'\nHelpers is a package that provides easy to use helper functions\nand variables.\n'
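The built-in help function reads this same __doc__ value, so the docstring we wrote is also what shows up in the REPL's interactive help:

>>> help(helpers)  # renders the package's __doc__ (plus its modules) as a help page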
Since modules are just Python files, we can do this same thing to document any module we write. To document a function we will create a triple-quoted string at the top of the function body. Let's write some documentation for extract_upper now. ~/using_modules/helpers/src/helpers/strings.py
def extract_upper(phrase):
    """
    extract_upper takes a string and returns a list containing
    only the uppercase characters from the string

    >>> extract_upper("Hello There, BOB")
    ['H', 'T', 'B', 'O', '']
    """
    return list(filter(str.isupper, phrase))

def extract_lower(phrase):
    return list(filter(str.islower, phrase))

if __name__ == "__main__":
    print("HELLO FROM HELPERS")
We've now created a docstring for a function. One of the downsides with documenting code is that it is pretty easy for the documentation and the code to get out of sync with one another, and bad documentation helps no one. Thankfully, docstrings can be used by another standard library module called doctest that allows us to add what looks like Python REPL lines into our docstrings that will then be evaluated to verify that they produce the expected results. Let's use the doctest module on our file to see if our documentation is accurate.
$ python3.7 -m doctest src/helpers/strings.py
**********************************************************************
File "src/helpers/strings.py", line 6, in strings.extract_upper
Failed example:
    extract_upper("Hello There, BOB")
Expected:
    ['H', 'T', 'B', 'O', '']
Got:
    ['H', 'T', 'B', 'O', 'B']
**********************************************************************
1 items had failures:
   1 of   1 in strings.extract_upper
***Test Failed*** 1 failures.
Our documentation is acting as an automated test and can now help us find regressions in our code and our documentation. In this case, the code works as intended, but there's a typo in the documentation that demonstrates how the code would be used. Let's fix that. ~/using_modules/helpers/src/helpers/strings.py
def extract_upper(phrase):
    """
    extract_upper takes a string and returns a list containing
    only the uppercase characters from the string

    >>> extract_upper("Hello There, BOB")
    ['H', 'T', 'B', 'O', 'B']
    """
    return list(filter(str.isupper, phrase))

def extract_lower(phrase):
    return list(filter(str.islower, phrase))

if __name__ == "__main__":
    print("HELLO FROM HELPERS")
If we run doctest again, we should see no output because the results match the expected outcome.
$ python3.7 -m doctest src/helpers/strings.py
$
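As an aside that isn't part of the lesson, the doctest module can also be invoked from inside the module itself, so running python3.7 src/helpers/strings.py would execute the doctests directly. A minimal sketch, which would replace the current print call in the __main__ block:

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # evaluates the >>> examples in this module's docstrings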
Setting a Shebang for a Script The last thing we want to do is adjust main.py so that we can run it directly. To do this, we need to do two things: explicitly make it executable using chmod, and add a shebang to the top of the script so that the proper program is used to run it. Shebangs are useful because they allow us to write scripts in languages other than our shell's language (bash, sh, zsh, etc.). For this to work, we need to add a reference to the executable to use at the top of the file in a special comment called a shebang. From the perspective of Python, a shebang starts like any other comment, but then immediately has an exclamation point. Let's set our script to use the default python executable that is currently active in our environment. ~/using_modules/main.py
#!/usr/bin/env python

from helpers.strings import extract_lower
from helpers import variables
from helpers import *
import helpers

print(f"Lowercase letters (from strings): {extract_lower(variables.name)}")
print(f"Uppercase letters (from package): {extract_upper(variables.name)}")
print(f"Off of helpers: {helpers.strings.extract_lower(variables.name)}")
If we make the script executable and run it, we should see the usual output without needing to pass it to the Python executable.
$ chmod +x ~/using_modules/main.py
$ ~/using_modules/main.py
Lowercase letters (from strings): ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters (from package): ['K', 'T']
Off of helpers: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Using the env command followed by the executable we'd normally use is a good approach to setting a shebang for Python. If we want to be explicit about the version of Python to use, then we can use the absolute path. Using our pyenv-installed Python 3.7.6, we would use this path: ~/using_modules/main.py
#!/home/cloud_user/.pyenv/versions/3.7.6/bin/python

from helpers.strings import extract_lower
from helpers import variables
from helpers import *
import helpers

print(f"Lowercase letters (from strings): {extract_lower(variables.name)}")
print(f"Uppercase letters (from package): {extract_upper(variables.name)}")
print(f"Off of helpers: {helpers.strings.extract_lower(variables.name)}")
If we switch our Python back to the system Python and run main.py, it will still have access to the helpers package which is only installed for version 3.7.6.
$ pyenv shell system
$ python -V
Python 2.7.5
$ ~/using_modules/main.py
Lowercase letters (from strings): ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']
Uppercase letters (from package): ['K', 'T']
Off of helpers: ['e', 'i', 't', 'h', 'h', 'o', 'm', 'p', 's', 'o', 'n']


Classes, Objects, and Exceptions

Classes and Object-Oriented Programming

What is an Object?

00:04:56

Lesson Description:

Python is an object-oriented programming language and that means we work primarily with objects. In this lesson, we'll take a closer look at what an object is. Documentation for This VideoPython Classes DocumentationWhat Is an Object? Objects can be a little confusing to think about, but a good way to think about objects is that they are entities encompassing data and functionality. Let's take a look at the built-in types we've been using to look at the data and functionality encompassed by them. Custom object types are more complex than the built-in types, but looking at the primitive types will help us understand objects from a high level. For strings (the str type), the primary data that we interact with is the string itself, but that doesn't mean it's the only value a string encompasses. When we talk about the functionality an object encompasses, we mean the methods that the object has access to. We've seen a lot of methods on strings, such as lower and upper. Thankfully, we can see everything an object encompasses by using the dir built-in function. Let's take a look at a string in the REPL:

$ python3.7
Python 3.7.6 (default, Jan 30 2020, 15:46:02)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> my_str = "Test String"
>>> dir(my_str)
['__add__', '__class__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__',
 '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', '__hash__', '__init__', '__init_subclass__',
 '__iter__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',
 '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'capitalize',
 'casefold', 'center', 'count', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'format_map', 'index',
 'isalnum', 'isalpha', 'isascii', 'isdecimal', 'isdigit', 'isidentifier', 'islower', 'isnumeric', 'isprintable',
 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'maketrans', 'partition', 'replace', 'rfind',
 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase',
 'title', 'translate', 'upper', 'zfill']
The dir function returns a list of all of the variables and functions that the object encompasses. We can chain any of these items off of our object and it will return a value. That value might be a method.
>>> my_str.__doc__
"str(object='') -> strnstr(bytes_or_buffer[, encoding[, errors]]) -> strnnCreate a new string object from the given object. If encoding ornerrors is specified, then the object must expose a data buffernthat will be decoded using the given encoding and error handler.nOtherwise, returns the result of object.__str__() (if defined)nor repr(object).nencoding defaults to sys.getdefaultencoding().nerrors defaults to 'strict'."
>>> my_str.isdigit
<built-in method isdigit of str object at 0x7f7fc84c11f0>
>>> my_str.isdigit()
False
We can see the documentation for the str type and even access the methods from the object. A good way to tell if something is an object is to try to assign it to a variable. All objects can be assigned to variables.
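For example, a method pulled off of our string is itself an object that we can assign to a variable and call later:

>>> shout = my_str.upper  # a bound method object assigned to a variable
>>> shout()
'TEST STRING'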

Creating and Using Python Classes

00:13:50

Lesson Description:

The next step in our programming journey requires us to think about how we can model concepts from our problem's domain. To do that, we'll often use classes to create completely new data types. In this lesson, we'll create our very first class and learn how to work with its data and functionality. Python Documentation for This VideoClassesDefining New Types Up to this point, we've been working with the built-in types that Python provides (e.g. str, int, float), but when we're modeling problems in our programs we often want more complex objects that fit our specific problem's domain. For instance, if we were writing a program to model information about vehicles for an automotive shop, then it would make sense for us to have an object type that represents a vehicle. This is where we will start working with classes. From this point on, most of the code that we'll be writing will be in files. Let's create a python_objects directory to hold these files that are only there to facilitate learning.

$ mkdir ~/python_objects
$ cd ~/python_objects
Creating Our First Class For this lesson, we'll use a file called vehicle.py. Our goal is to model a vehicle that has tires and an engine. To create a class we use the class keyword, followed by a name for the class, starting with a capital letter. Let's create our first class, the Vehicle class: ~/python_objects/vehicle.py
class Vehicle:
    """
    Docstring describing the class
    """

    def __init__(self):
        """
        Docstring describing the method
        """
        pass
This is an incredibly simple class. A few things to note here are that by adding a triple-quoted string right under the definition of the class, and also right under the definition of a method or function, we can add documentation. This documentation is nice because we can add examples in this string to run as tests to help ensure our documentation stays up-to-date with the implementation. A method is a function defined within the context of an object, and Python classes can define special functions that start with double underscores __, such as the __init__ method. This method is the initializer for our class, and it is where we customize what happens when a new instance is being created. In practice, this method will usually just set attributes on the instance. The initializer is what is used when we create a new version of our class by running code like this:
>>> my_vehicle = Vehicle()
We would like our Vehicle class to hold a few pieces of data such as the tires and an engine. For the time being, we're going to have those be a list containing a string for the tires and a string for the engine. Let's modify our __init__ method to have the engine and tires parameters: ~/python_objects/vehicle.py
class Vehicle:
    """
    Vehicle models a vehicle w/ tires and an engine
    """

    def __init__(self, engine, tires):
        self.engine = engine
        self.tires = tires
What Is self? A big change from writing functions to writing methods is the presence of self. This variable references the individual instance of the class that we're working with. The Vehicle class holds onto the information about vehicles within our program, where an instance of the Vehicle class could represent a specific vehicle like my Honda Civic. Let's load our class into the REPL using python3.7 -i vehicle.py, and then we'll be able to create a Honda Civic.
$ python3.7 -i vehicle.py
>>> civic = Vehicle('4-cylinder', ['front-driver', 'front-passenger', 'rear-driver', 'rear-passenger'])
>>> civic.tires
['front-driver', 'front-passenger', 'rear-driver', 'rear-passenger']
>>> civic.engine
'4-cylinder'
Once we have our instance, we're able to access our internal attributes by using a period (.). Attributes are variables attached to the instance. Our civic variable has an engine attribute, which just means that engine is one of its internal variables. Defining a Custom Method The last thing that we'll do to round out the first rendition of our first class is to define a method that prints a description of the vehicle to the screen. ~/python_objects/vehicle.py
class Vehicle:
    """
    Vehicle models a vehicle w/ tires and an engine
    """

    def __init__(self, engine, tires):
        self.engine = engine
        self.tires = tires

    def description(self):
        print(f"A vehicle with an {self.engine} engine, and {self.tires} tires")
Our description method doesn't have any actual arguments, but we pass the instance in as self. From there, we can access the instance's attributes by calling self.ATTRIBUTE_NAME. Let's use this new method:
$ python3.7 -i vehicle.py
>>> honda = Vehicle('4-cylinder', ['front-driver', 'front-passenger', 'rear-driver', 'rear-passenger'])
>>> honda.engine
'4-cylinder'
>>> honda.tires
['front-driver', 'front-passenger', 'rear-driver', 'rear-passenger']
>>> honda.description
<bound method Vehicle.description of <__main__.Vehicle object at 0x7fb5f3fbbda0>>
>>> honda.description()
A vehicle with an 4-cylinder engine, and ['front-driver', 'front-passenger', 'rear-driver', 'rear-passenger'] tires
Just like a normal function, if we don't use parentheses, the method won't execute. Adding and Removing Attributes from Instances We've seen how to define attributes as part of our instance initialization code, but an instance of a custom class also acts as a namespace for any attribute we want. This means that after we create an instance of a custom class, we can add attributes to it in the same way we assign a new variable. We just need to chain it off of our instance's identifier. Let's add a serial_number to my honda.
>>> honda.serial_number = '1234'
>>> honda.serial_number
'1234'
We can remove attributes from an instance of a class using the del keyword, just like we would to delete a variable. Remember, we need to be accessing the attribute and not just pass in our object.
>>> del honda.serial_number
>>> honda.serial_number
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Vehicle' object has no attribute 'serial_number'
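As a small aside not covered in the video, the built-in getattr function accepts a default value, which lets us read an attribute that may or may not exist without raising an AttributeError:

>>> getattr(honda, 'serial_number', 'unknown')  # returns the default instead of raising
'unknown'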

Custom Constructors, Class Methods, and Decorators

00:09:15

Lesson Description:

There's a lot to learn when it comes to creating and building robust classes. In this lesson, we continue to learn about some of the tools at our disposal when creating classes: custom constructors and class methods. Documentation for This VideoClasses Class MethodsCustom Class Constructors Unlike other languages like Java, Python doesn't provide a way for us to create multiple constructor methods. Instead, we get a single constructor method that we can customize, the __init__ method. We've already customized this for our Vehicle class. This method has a default implementation that takes no arguments, so by defining this in our class, we've created a custom constructor. Using @classmethod to Create Convenience Constructor Methods Although creating multiple different constructors isn't a feature of Python, it doesn't mean we can't do something similar. If we want another way to construct a Vehicle object with some preset values, we can create convenience methods using what is known as a class method. A class method is a function attached to the class itself, not an instance of the class, and it has access to any class-level attributes. To create a class method, we need to use what is known as a decorator. Decorators are functions or classes that we use to add additional functionality to a function by prefixing the decorator's name with an at-sign (@) and putting it on the line above our function or method definition. This sounds confusing, but remember back to our look at higher-order functions. A decorator takes a function and returns another modified function in its place. For the purposes of the PCAP, we only need to know how to use one specific decorator so that we can add class methods to our classes: the @classmethod decorator. Let's add a method to our Vehicle class that will allow us to create a bicycle (which has two wheels, and no engine). ~/python_objects/vehicle.py

class Vehicle:
    """
    Vehicle models a vehicle w/ tires and an engine
    """
    default_tire = 'tire'

    def __init__(self, engine, tires):
        self.engine = engine
        self.tires = tires

    @classmethod
    def bicycle(cls, tires=None):
        if not tires:
            tires = [cls.default_tire, cls.default_tire]
        return cls(None, tires)

    def description(self):
        print(f"A vehicle with an {self.engine} engine, and {self.tires} tires")
Notice we added a class-level variable called default_tire. This variable is set on the class itself and will also be available to instances of the class. By decorating the bicycle as a @classmethod, we're able to call Vehicle.bicycle(), and the class itself will be passed in as the implicit cls argument (this name is a convention, not a required name). Because the class itself (Vehicle) is passed into the method as the cls variable, that means when we call cls(), it is equivalent to doing Vehicle() and will invoke the __init__ method. It's beneficial to use the cls variable instead of the class name, because if we ever change the name of the class, then we won't need to modify this function. If no argument is passed in for the tires parameter, then we'll create a default list containing the value of the default_tire class attribute two times. Let's load this file into the REPL and see if it works:
$ python3.7 -i vehicle.py
>>> bike = Vehicle.bicycle()
>>> bike
<__main__.Vehicle object at 0x7f947c0f7750>
>>> bike.description()
A vehicle with an None engine, and ['tire', 'tire'] tires
>>> bike.engine
>>> bike.tires
['tire', 'tire']
As we start modeling more and more concepts, there will be more situations where we'll want to use class methods to perform actions that require information available to only the class and doesn't require any instance information.
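As a sketch of how this pattern scales, we could add more convenience constructors in exactly the same way. The motorcycle name and its '2-cylinder' default below are illustrative, not part of the lesson:

    @classmethod
    def motorcycle(cls, tires=None):
        if not tires:
            tires = [cls.default_tire, cls.default_tire]
        return cls('2-cylinder', tires)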

Inheritance and Super

00:15:12

Lesson Description:

Our Vehicle.bicycle class method does a good job of creating a vehicle that looks like a bicycle, but should we have a Bicycle class instead? Because a bicycle is a type of vehicle, we can leverage the code that exists in the Vehicle class by creating a new class that inherits from the Vehicle class. In this lesson, we'll learn about inheritance, one of the core tenets of object-oriented programming. Documentation for This VideoClasses Inheritance SuperUsing Inheritance to Customize an Existing Class Our existing Vehicle implementation does exactly what we need it to do for a general vehicle, but there are other, more specific types of vehicles such as cars, trucks, boats, and bicycles. If we wanted to model these other types of vehicles, we could use our existing Vehicle class as a starting point by inheriting its existing implementation. Let's add a new Bicycle class to a new file called bicycle.py. ~/python_objects/bicycle.py

from vehicle import Vehicle

class Bicycle(Vehicle):
    pass
By passing in the Vehicle class to our class definition for Bicycle, we're specifying that our class is a subclass of Vehicle. As it stands right now, the Bicycle class will behave exactly like the Vehicle type. From here, we can add more functionality and internal states specific to a bicycle. The convenience method we added to Vehicle essentially allows us to have a constructor that doesn't accept an engine, since a bicycle doesn't have an engine. That's what the constructor for Bicycle should do. Let's customize the initializer to do this. ~/python_objects/bicycle.py
from vehicle import Vehicle

class Bicycle(Vehicle):
    def __init__(self, tires=[]):
        if not tires:
            tires = [self.default_tire, self.default_tire]
        self.tires = tires
Because Bicycle is a subclass of Vehicle, it already has access to the class-level variable default_tire, so we don't need to redefine that to use it within the __init__ method. Let's use our class in the REPL to see if it is working correctly.
$ python -i bicycle.py
>>> bike = Bicycle()
>>> bike.tires
['tire', 'tire']
>>> custom_bike = Bicycle(['front-tire', 'back-tire'])
>>> custom_bike.description()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/cloud_user/python_objects/vehicle.py", line 18, in description
    print(f"A vehicle with an {self.engine} engine, and {self.tires} tires")
AttributeError: 'Bicycle' object has no attribute 'engine'
This error makes sense. Our bicycle doesn't have an engine. We're going to need to customize the description method to change the message. Looking at this error, does it make sense for Vehicle to require an engine and tires? Not really. A bicycle doesn't have an engine, and a boat doesn't have tires, so neither of those should be required. One piece of information that does describe a vehicle could be distance_traveled. Let's make Vehicle more abstract and have the description only print out the distance traveled. ~/python_objects/vehicle.py
class Vehicle:
    """
    Vehicle models a device that can be used to travel.
    """
    def __init__(self, distance_traveled=0, unit='miles'):
        self.distance_traveled = distance_traveled
        self.unit = unit

    def description(self):
        print(f"A {self.__class__.__name__} that has traveled {self.distance_traveled} {self.unit}")
Now our Vehicle class is much more generic and can be used as the parent class or base class for any more specific vehicle. We do need to change our Bicycle implementation now, and we will want to make sure that we're setting a distance_traveled and unit. Thankfully, we don't need to redo these lines, because we can use super. Let's break down the expression self.__class__.__name__. The variable self is an instance of Vehicle in this case, but when this method is called from a subclass, then self will be an instance of that class instead. We want to display the name of the class in our description output, so we'll access the __name__ attribute on the class itself. That will give us the string value. Using super() When we want to customize a method written on a parent class without entirely replacing the method, then we're able to invoke the parent class's implementation of the method by calling the super() function. We need to do this to change the Bicycle.__init__ method. A bicycle has tires, so we want that as another parameter in the initializer. Otherwise, we would like to have the initialization behave the same way as it does for Vehicle. Let's implement __init__. ~/python_objects/bicycle.py
from vehicle import Vehicle

class Bicycle(Vehicle):
    default_tire = 'tire'

    def __init__(self, tires=[], distance_traveled=0, unit='mile'):
        super().__init__(distance_traveled, unit)
        if not tires:
            tires = [self.default_tire, self.default_tire]
        self.tires = tires
By calling super(), we have access to the methods implemented in our parent class, Vehicle. We'll then call the __init__ method with the proper parameters. The self, in the context of this call to __init__, is our Bicycle instance. So, this method call will set distance_traveled and unit on our Bicycle class. We're leveraging code from the parent class while adding a little more to the initialization of this new class. Let's take this into the REPL to see how it works.
$ python3.7
Python 3.7.6 (default, Jan 30 2020, 15:46:02)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from vehicle import Vehicle
>>> from bicycle import Bicycle
>>> vehicle = Vehicle()
>>> bike = Bicycle()
>>> vehicle.description()
A Vehicle that has traveled 0 miles
>>> bike.description()
A Bicycle that has traveled 0 mile
As we can see, description outputs something different for the Bicycle without us even changing it because we wrote it in a generic, context-aware way. That being said, we would like to add information about our tires to the description too, and once again this is a situation where we can leverage super. To make this a little quicker to test, we'll also add a __name__ == "__main__" condition with some code to demonstrate what we're writing. ~/python_objects/bicycle.py
from vehicle import Vehicle

class Bicycle(Vehicle):
    default_tire = 'tire'

    def __init__(self, tires=[], distance_traveled=0, unit='mile'):
        super().__init__(distance_traveled, unit)
        if not tires:
            tires = [self.default_tire, self.default_tire]
        self.tires = tires

    def description(self):
        initial = super().description()
        return f"{initial} on {len(self.tires)} tires."

if __name__ == "__main__":
    bike = Bicycle()
    print(bike.description())
It doesn't really make sense for the call to description to print the information; that just makes it hard to work with. Returning a string works a little better because the code using the class gets to decide when (and whether) anything is printed. We want the initial string provided by Vehicle, and then we'll customize it to add a little information about how many tires we have. Because of how Vehicle is currently written, this won't quite work the way we would like.
$ python3.7 bicycle.py
A Bicycle that has traveled 0 mile
None on 2 tires.
Let's fix this by making Vehicle.description return a string rather than print a message. ~/python_objects/vehicle.py
class Vehicle:
    """
    Vehicle models a device that can be used to travel.
    """
    def __init__(self, distance_traveled=0, unit='miles'):
        self.distance_traveled = distance_traveled
        self.unit = unit

    def description(self):
        return f"A {self.__class__.__name__} that has traveled {self.distance_traveled} {self.unit}"
Let's run bicycle.py one last time.
$ python3.7 bicycle.py
A Bicycle that has traveled 0 miles on 2 tires.
We've learned quite a bit in this lesson about inheritance and super, but also how not to design our objects. Sometimes our initial thoughts about our objects are just not right and can make working with the items harder than we originally imagined.

Single and Multiple Inheritance

00:17:38

Lesson Description:

Sometimes we have an object that makes sense to be a subclass of more than one other type. In these situations, we can use what is called multiple inheritance. Multiple inheritance is not something we use too often, but it is good to understand how it works. We'll learn all about it in this lesson. Documentation for This VideoClasses Inheritance Multiple Inheritance SuperMultiple Inheritance Multiple inheritance allows us to inherit from multiple parent classes. This can be used to pull functionality from multiple different classes into a single class. To demonstrate multiple inheritance, we're going to continue modeling vehicles starting with new classes for a Car and a Boat. These classes are very similar to what we did with Bicycle, so we'll quickly create these classes without much additional comment. ~/python_objects/car.py

from vehicle import Vehicle

class Car(Vehicle):
    default_tire = 'tire'

    def __init__(self, engine, tires=[], distance_traveled=0, unit='miles'):
        super().__init__(distance_traveled, unit)
        if not tires:
            tires = [self.default_tire, self.default_tire]
        self.tires = tires
        self.engine = engine

    def drive(self, distance):
        self.distance_traveled += distance
~/python_objects/boat.py
from vehicle import Vehicle

class Boat(Vehicle):
    def __init__(self, boat_type='sail', distance_traveled=0, unit='miles'):
        super().__init__(distance_traveled, unit)
        self.boat_type = boat_type

    def voyage(self, distance):
        self.distance_traveled += distance

    def description(self):
        initial = super().description()
        return f"{initial} using a {self.boat_type}"
We can model another type of vehicle, called AmphibiousVehicle, by inheriting from both a Car and a Boat. This is a type of vehicle that is both a car (so it can travel on land) and a boat (so it can travel through the water). To use multiple inheritance, we separate our parent classes with a comma in the same way that we would with function parameters. Here's the initial version of our AmphibiousVehicle (in amphibious_vehicle.py): ~/python_objects/amphibious_vehicle.py
from boat import Boat
from car import Car

class AmphibiousVehicle(Car, Boat):
    pass
Let's load this class into the REPL to see what it attempts to do when we initialize a new one.
$ python -i amphibious_vehicle.py
>>> water_car = AmphibiousVehicle()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'engine'
>>>
By default, our class will try to use the method from the first class in the list of classes (Car) that we inherit when we don't customize our method. We're going to want to customize __init__ to explicitly set our boat_type to 'motor' and then continue with our initialization using the code from the Car class. ~/python_objects/amphibious_vehicle.py
from boat import Boat
from car import Car

class AmphibiousVehicle(Car, Boat):
    def __init__(self, engine, tires=[], distance_traveled=0, unit='miles'):
        super().__init__(engine, tires, distance_traveled, unit)
        self.boat_type = 'motor'

    def travel(self, land_distance=0, water_distance=0):
        self.voyage(water_distance)
        self.drive(land_distance)
Let's try this again to see what happens.
$ python -i amphibious_vehicle.py
>>> water_car = AmphibiousVehicle('4 cylinder')
>>> water_car.description()
'A AmphibiousVehicle that has traveled miles miles using a motor'
There are some issues here. First, AmphibiousVehicle isn't quite what we're going for when we print this out. Additionally, our distance_traveled attribute is apparently being set to 'miles' instead of 0. To understand what is going on, we need to get a better understanding of the method resolution order when calling super when using multiple inheritance. Method Resolution Order Method resolution order is a term for looking at how methods on an object are found and which ones are run. Thankfully, we can see the method resolution order (i.e. "MRO") by accessing the __mro__ attribute on our AmphibiousVehicle class.
>>> AmphibiousVehicle.__mro__
(<class '__main__.AmphibiousVehicle'>, <class 'car.Car'>, <class 'boat.Boat'>, <class 'vehicle.Vehicle'>, <class 'object'>)
This shows us that we will run what is found in AmphibiousVehicle first, Car second, Boat third, Vehicle fourth, and object last (object is the default type that we inherit when we create a class). What this list doesn't tell us is that when we call super, it will call the method in all of our parent classes that implement it. To show this, let's add some print lines into our parent class __init__ functions. ~/python_objects/boat.py
from vehicle import Vehicle

class Boat(Vehicle):
    def __init__(self, boat_type='sail', distance_traveled=0, unit='miles'):
        print(f"__init__ from Boat with distance_traveled: {distance_traveled} and {unit}")
        super().__init__(distance_traveled, unit)
        self.boat_type = boat_type

    def voyage(self, distance):
        self.distance_traveled += distance

    def description(self):
        initial = super().description()
        return f"{initial} using a {self.boat_type}"
~/python_objects/car.py
from vehicle import Vehicle

class Car(Vehicle):
    default_tire = 'tire'

    def __init__(self, engine, tires=[], distance_traveled=0, unit='miles'):
        print(f"__init__ from Car with distance_traveled: {distance_traveled} and {unit}")
        super().__init__(distance_traveled, unit)
        if not tires:
            tires = [self.default_tire, self.default_tire]
        self.tires = tires
        self.engine = engine

    def drive(self, distance):
        self.distance_traveled += distance

~/python_objects/vehicle.py
class Vehicle:
    """
    Vehicle models a device that can be used to travel.
    """
    def __init__(self, distance_traveled=0, unit='miles'):
        print(f"__init__ from Vehicle with distance_traveled: {distance_traveled} and {unit}")
        self.distance_traveled = distance_traveled
        self.unit = unit

    def description(self):
        return f"A {self.__class__.__name__} that has traveled {self.distance_traveled} {self.unit}"
Now if we initialize a new AmphibiousVehicle, we should get more insight into how distance_traveled is being set to 'miles'.
$ python -i amphibious_vehicle.py
>>> water_car = AmphibiousVehicle('4-cylinder')
__init__ from Car with distance_traveled: 0 and miles
__init__ from Boat with distance_traveled: miles and miles
__init__ from Vehicle with distance_traveled: miles and miles
As we can see, we call super one time, and yet both Car and Boat have their __init__ methods run. This is a little confusing because it's not that we're running both methods from AmphibiousVehicle. It's that Car.__init__ also calls super. Because self is an AmphibiousVehicle at the moment that super is called from Car, it calls __init__ from the next object in the method resolution order, which is Boat. We're then calling Boat.__init__ with only distance_traveled and unit as positional arguments. If we run Boat(0, 'miles'), this will give us a Boat with distance_traveled set to 'miles' (the 0 lands on boat_type and 'miles' lands on distance_traveled). How do we get around this? By making our objects more flexible: we use **kwargs to capture extra keyword arguments and explicitly pass keyword arguments when calling super().__init__. ~/python_objects/amphibious_vehicle.py
from boat import Boat
from car import Car

class AmphibiousVehicle(Car, Boat):
    def __init__(self, engine, tires=[], distance_traveled=0, unit="miles"):
        super().__init__(
            engine=engine, tires=tires, distance_traveled=distance_traveled, unit=unit
        )
        self.boat_type = "motor"

    def travel(self, land_distance=0, water_distance=0):
        self.voyage(water_distance)
        self.drive(land_distance)
~/python_objects/boat.py
from vehicle import Vehicle

class Boat(Vehicle):
    def __init__(self, boat_type="sail", distance_traveled=0, unit="miles", **kwargs):
        super().__init__(distance_traveled=distance_traveled, unit=unit, **kwargs)
        self.boat_type = boat_type

    def voyage(self, distance):
        self.distance_traveled += distance

    def description(self):
        initial = super().description()
        return f"{initial} using a {self.boat_type}"
~/python_objects/car.py
from vehicle import Vehicle

class Car(Vehicle):
    default_tire = "tire"

    def __init__(self, engine, tires=[], distance_traveled=0, unit="miles", **kwargs):
        super().__init__(distance_traveled=distance_traveled, unit=unit, **kwargs)
        if not tires:
            tires = [self.default_tire, self.default_tire]
        self.tires = tires
        self.engine = engine

    def drive(self, distance):
        self.distance_traveled += distance
~/python_objects/vehicle.py
class Vehicle:
    """
    Vehicle models a device that can be used to travel.
    """

    def __init__(self, distance_traveled=0, unit="miles", **kwargs):
        self.distance_traveled = distance_traveled
        self.unit = unit

    def description(self):
        return f"A {self.__class__.__name__} that has traveled {self.distance_traveled} {self.unit}"
Now that our initialization methods are more flexible, and we're being more explicit, let's try this again to see if our attributes are set properly.
$ python -i amphibious_vehicle.py
>>> water_car = AmphibiousVehicle('4 cylinder')
>>> water_car.description()
'A AmphibiousVehicle that has traveled 0 miles using a motor'
>>> water_car.travel(10, 15)
>>> water_car.description()
'A AmphibiousVehicle that has traveled 25 miles using a motor'
We've finally been able to use our class properly and call the travel method. Because Boat implements voyage, and Car implements drive, we're able to call each of those methods using super. They will be dispatched to the proper parent class. This shows one of the important things to consider when working with multiple inheritance. Things work well if our parent classes don't implement the same method names, but it can become a headache to debug issues when multiple classes implement the same method and also call super.
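To see the cooperative super pattern in isolation, here is a minimal, self-contained sketch (the class names are illustrative) showing how a single super().__init__ call walks the entire method resolution order:

class Base:
    def __init__(self, **kwargs):
        print("Base")

class Left(Base):
    def __init__(self, **kwargs):
        print("Left")
        super().__init__(**kwargs)

class Right(Base):
    def __init__(self, **kwargs):
        print("Right")
        super().__init__(**kwargs)

class Child(Left, Right):
    def __init__(self, **kwargs):
        print("Child")
        super().__init__(**kwargs)

Child()  # prints Child, Left, Right, Base -- the same order as Child.__mro__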

Upcoming Lesson: Name Mangling

Lesson Description:

Python doesn't really have the concept of private class or instance variables. Data on an object can always be accessed explicitly, but by following some conventions, we can utilize a feature called name mangling to ensure that private data on a parent class isn't overwritten if the subclass also has a variable with the same name. Documentation for This VideoClasses Private VariablesWhat Is Name Mangling? Name mangling is something that allows the interpreter to replace special identifiers in our classes with ones that are specific to the class they're written in. This isn't something we'll leverage often, but the official tutorial has a snippet of code that demonstrates this really well. Let's create a file called mapping.py to work with this example code (which can be found here). ~/python_objects/mapping.py

class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of original update() method

class MappingSubclass(Mapping):
    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)
There are some comments in this code that roughly explain what is going on, but the important points are these: In Mapping, we're keeping a reference to the original version of update as __update so that we can use it within the __init__ method. By doing this, we'll always use the original version of the method even if a subclass implements a different version of update (as MappingSubclass does). The name mangling aspect is that __update becomes _Mapping__update, even if we add an __update identifier to MappingSubclass. Let's add an __update identifier to MappingSubclass before we take a look at what is going on in the REPL. ~/python_objects/mapping.py
class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update  # private copy of original update() method


class MappingSubclass(Mapping):
    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)

    def print_something(self):
        print("Printing something")

    __update = print_something
Let's load this into the REPL and see the identifiers that exist on our classes.
$ python -i mapping.py
>>> Mapping.__update
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object 'Mapping' has no attribute '__update'
>>> dir(Mapping)
['_Mapping__update', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'update']
>>> dir(MappingSubclass)
['_MappingSubclass__update', '_Mapping__update', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'print_something', 'update']
Notice that Mapping has a _Mapping__update identifier, but not __update, and MappingSubclass has both _MappingSubclass__update and _Mapping__update. These are the rules for creating an identifier that will be name mangled: the name starts with at least two leading underscores (__), the name has at most one trailing underscore, and the identifier must appear within a class definition.
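The same mangling applies to attributes assigned on self. A quick sketch using a hypothetical Example class:

class Example:
    def __init__(self):
        self.__secret = 42  # stored on the instance as _Example__secret

e = Example()
print(e._Example__secret)  # 42 -- still reachable, just under the mangled name
# e.__secret (accessed outside the class body) would raise an AttributeError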

Inspecting Objects

00:14:32

Lesson Description:

As we work with more and more custom classes, we will need to start inspecting the variables we have to see what information we have to work with. We won't be able to keep the information about all of the types in our systems in our minds after a certain point, and knowing the tools we can use to get more information from our objects is very useful. In this lesson, we'll learn about various built-in functions, methods, and attributes that we can use to get more information about classes and objects we're working with. Documentation for This VideoClasses Special Attributes Basic Object Customization The __str__ Method The type Function The hasattr Function The issubclass Function The isinstance FunctionInspecting Instances and Classes There are two main things we'll want to get more information about when we're doing object-oriented programming: the classes themselves, and the instances of those classes. By learning more about the classes that we'll need to work with, we can have a better idea of how they were intended to be used. By learning more about the instances created as our code runs, we can better debug and understand what is going on as we interact with the objects. Let's start by taking a look at how we can learn more about a class by loading our amphibious_vehicle.py class into the REPL and taking a look at some of the "private" attributes and methods on the class.

$ python3.7 -i amphibious_vehicle.py
>>> AmphibiousVehicle.__bases__
(<class 'car.Car'>, <class 'boat.Boat'>)
>>> from vehicle import Vehicle
>>> Vehicle.__subclasses__()
[<class 'boat.Boat'>, <class 'car.Car'>]
>>> dir(AmphibiousVehicle)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'default_tire', 'description', 'drive', 'travel', 'voyage']
Notice that __bases__ doesn't show up when we pass our class to the dir function. Some of the special attributes that exist don't show up when using dir, and we just need to know them. We also have some functions we can use to get information about our classes:The hasattr function: Takes an object and a string with the name of the identifier we'd like to check for. It's worth noting that if we pass in the class, it will check for class-level attributes, not instance-level attributes.
>>> from boat import Boat
>>> hasattr(Boat, 'boat_type')
False
>>> from car import Car
>>> hasattr(Car, 'default_tire')
True
The issubclass function: Checks to see if the first class passed in is a subclass of the second class. The order is important.
>>> from vehicle import Vehicle
>>> issubclass(Boat, Vehicle)
True
>>> issubclass(Boat, AmphibiousVehicle)
False
>>> issubclass(AmphibiousVehicle, Boat)
True
The isinstance function: Checks to see if an object is an instance of the given class. Note that an object is an instance of its class's subclasses.
>>> from bicycle import Bicycle
>>> water_car = AmphibiousVehicle('4 cylinder')
>>> isinstance(water_car, Bicycle)
False
>>> isinstance(water_car, AmphibiousVehicle)
True
>>> isinstance(water_car, Boat)
True
The __dict__ attribute: Returns a dictionary (or dictionary-like object) containing all of the custom (i.e. writable) attributes on the object. This can be used on both classes and instances of classes. The result for a class is a bit weird looking, but notice that it only contains the methods and attributes we defined.
>>> water_car.__dict__
{'distance_traveled': 0, 'unit': 'miles', 'boat_type': 'motor', 'tires': ['tire', 'tire'], 'engine': '4 cylinder'}
>>> Boat.__dict__
mappingproxy({'__module__': 'boat', '__init__': <function Boat.__init__ at 0x7ff9228b9f80>, 'voyage': <function Boat.voyage at 0x7ff9228b9dd0>, 'description': <function Boat.description at 0x7ff922835050>, '__doc__': None})
The type function: Returns the class used to create the object.
>>> type(water_car)
<class '__main__.AmphibiousVehicle'>
Notice that the class is __main__.AmphibiousVehicle. This shows the value of the __module__ attribute for the class and then the class name. Normally, this will not be __main__, it would be the module that defines the class. It's __main__ right now because we launched the REPL using python3.7 -i amphibious_vehicle.py. That means it interpreted the file, effectively running those lines in the REPL itself. If we access the __module__ attribute on a different class, we will see the name of the defining module.
>>> Boat.__module__
'boat'
Customizing Objects with __str__ In addition to being able to get information from classes and instances that we're working with, we can also make our class instances present their information in a better way for various situations. The primary situation where we'll customize our object's behavior is when it's converted to a string. Let's take a look at what an AmphibiousVehicle looks like when converted to a string or returned.
>>> str(water_car)
'<__main__.AmphibiousVehicle object at 0x7ff9283a6d0>'
This is not super helpful, but we can customize this output by defining the __str__ method. Let's define this method to return the class name and the attributes currently on the instance, using the __dict__ attribute. We'll also add a main section so that we can quickly test what this output will look like. ~/python_objects/amphibious_vehicle.py
from boat import Boat
from car import Car

class AmphibiousVehicle(Car, Boat):
    def __init__(self, engine, tires=[], distance_traveled=0, unit="miles"):
        super().__init__(
            engine=engine, tires=tires, distance_traveled=distance_traveled, unit=unit,
        )
        self.boat_type = "motor"

    def travel(self, land_distance=0, water_distance=0):
        self.voyage(water_distance)
        self.drive(land_distance)

    def __str__(self):
        return f"<{self.__class__.__name__} {self.__dict__}>"

if __name__ == "__main__":
    water_car = AmphibiousVehicle('4 cylinder')
    print(water_car)
Let's run this file to see our newly-configured string output.
$ python3.7 amphibious_vehicle.py
<AmphibiousVehicle {'distance_traveled': 0, 'unit': 'miles', 'boat_type': 'motor', 'tires': ['tire', 'tire'], 'engine': '4 cylinder'}>
We wouldn't want to drop that into a message printed to end-users of our code, but this makes print debugging way more informative than seeing the location for the object in memory.
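One brief aside not covered in the video: the bare REPL display of an object uses __repr__ rather than __str__, so >>> water_car would still show the default output. If we wanted both to match, we could add something along these lines to the class:

    def __repr__(self):
        return self.__str__()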


Input and Output Basics

Interacting with Files

00:15:58

Lesson Description:

There are some core actions that we need to understand in any programming language in order to be productive, and one of these is interacting with files. In this lesson, we'll learn how to read from and write to files, and we'll take a look at how bytes can be represented in code. Documentation For This Video: The open function, The file object, The io module, Bytes Objects. Files as Objects One of the beautiful aspects of working in an object-oriented programming language is that we can represent concepts as objects with functionality. Files are a great use case for this. Python gives us the file object (or concept, really). These objects provide us a few things:
A read method to access the underlying data in the file
A write method to place data into the underlying file
To test this out, we're going to create a simple text file with some names in it, and then read and modify it to see what we can learn. Opening a File The first step to interacting with a file is to "open" it, and in Python we'll use the open function. This function takes two main arguments:
file - The path to the file on disk (or where you'd like to create it)
mode - How you would like to interact with the file
The file argument is pretty simple, but the mode argument has a variety of options that all work a little differently:
'r' - Opens the file for reading, which is the default mode
'w' - Opens the file for writing, while removing the existing content (truncating the file)
'x' - Opens the file to create it, failing if the file already exists
'a' - Opens the file for writing without truncating, appending any new writes to the end of the file
'b' - Opens the file in binary mode, in which the file expects to write and return bytes objects
't' - Opens the file in text mode, the default mode, where the object expects to write and return strings
'+' - Opens the file for reading and writing
These modes can be used in combination, so w+b is a valid mode saying that we want to read and write with bytes, with the existing file being truncated (from the w). Let's create a new script called using_files.py (within a new directory called file_io), and we'll start interacting with a file containing some names. The file doesn't exist yet, but if it did, we'd like to truncate it and prepare to write to it. ~/file_io/using_files.py

my_file = open('xmen.txt', 'w+')
Now we have a new file object that we can write to. Writing to the File Before we can read from our file, we need it to have some content. There are a few primary methods we'll use for this, depending on whether we want to work with lines or individual characters. The write method writes only the characters that we specify, whereas the writelines method takes a list of strings that should each end up on its own line. Let's add some names to our file, each on its own line, using both methods: ~/file_io/using_files.py
my_file = open('xmen.txt', 'w+')
my_file.write('Beast\n')
my_file.write('Phoenix\n')
my_file.writelines([
    'Cyclops',
    'Bishop',
    'Nightcrawler',
])
Let's save the file, run it, and then check the contents of xmen.txt:
$ python3.7 using_files.py
$ cat xmen.txt
Beast
Phoenix
CyclopsBishopNightcrawler
$
This isn't quite what we expected. You would probably think that writelines would add the line ending, but the truth is that we still need to add the '\n' to the end of each item; writelines is essentially shorthand for multiple calls to write and won't add line endings for us. Another thing that we didn't do is close the file. When we're finished working with a file, we should call the close method. It's not strictly necessary when running this script because the file handle will be closed when the program terminates. But when we're interacting with files from within a server, for instance, the program won't terminate for a long time.

Reading from a File

Now that we have some content in the file, let's close it within the script and then re-open it for reading.

~/file_io/using_files.py
my_file = open('xmen.txt', 'w+')
my_file.write('Beast\n')
my_file.write('Phoenix\n')
my_file.writelines([
    'Cyclops\n',
    'Bishop\n',
    'Nightcrawler\n',
])
my_file.close()

my_file = open('xmen.txt', 'r')
print(my_file.read())
my_file.close()
Now we can run the script again to see what happens:
$ python3.7 using_files.py
Beast
Phoenix
Cyclops
Bishop
Nightcrawler

$
Since we're reading the file in 'text' mode, we'll receive a single string from the read method that contains the newline characters, and when printed, it will print the newlines accordingly. If we didn't want this parsing to occur, we could work with the file in bytes mode. If we were to call the read method again, we would receive an empty string in response. The reason for this is that the file holds onto a cursor marking its current location in the file, and when we read, it returns everything after that cursor position and moves the cursor to the end. To reread the existing content, we'll need to use seek to move earlier in the file. Additionally, as an alternative to read, we can use readlines to return a list of the lines in the file if that makes working with the data a little easier. Here's an example of both of these things in action.

~/file_io/using_files.py
my_file = open('xmen.txt', 'w+')
my_file.write('Beast\n')
my_file.write('Phoenix\n')
my_file.writelines([
    'Cyclops\n',
    'Bishop\n',
    'Nightcrawler\n',
])

my_file.seek(0)
my_file.write('Morph')
my_file.seek(0)
for line in my_file.readlines():
    print(line)

my_file.close()
This would output:
$ python3.7 using_files.py
Morph

Phoenix

Cyclops

Bishop

Nightcrawler

$
The with Statement

Remembering to close files that we opened can be tedious, and to get around this, Python gives us the with statement. A with statement takes an object that has a close method and will call that method after the block has run. Let's rewrite our existing code to utilize the with statement:

~/file_io/using_files.py
with open('xmen.txt', 'w+') as my_file:
    my_file.write('Beast\n')
    my_file.write('Phoenix\n')
    my_file.writelines([
        'Cyclops\n',
        'Bishop\n',
        'Nightcrawler\n',
    ])

my_file = open('xmen.txt', 'r')
with my_file:
    print(my_file.read())
When we open the file to write, we're using the shorthand as expression to open the file within the with statement, and assigning it to the variable my_file within the block. This is a really handy tool if we don't need to use the file in any other way. An alternative would be to create the my_file variable manually and then pass the variable into the with statement like we did when we were reading from the file.
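As a quick check that isn't shown in the video, we can confirm that the with statement really does close the file for us by inspecting the file object's closed attribute; this assumes the xmen.txt file from above already exists:
>>> with open('xmen.txt', 'r') as my_file:
...     print(my_file.closed)
...
False
>>> my_file.closed
True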

Working with Bytes

00:11:41

Lesson Description:

Now that we know how to work with files at a base level, we need to know how to work with bytes and bytearray objects since they're so closely related to file IO.

Documentation For This Video: The open function, The file object, The io module, Binary Sequence Types, Bytes Objects, Bytearray Objects

What is a bytes Object?

A bytes object is an immutable sequence (just like a string) that consists of single bytes of data. Where a string is a sequence of characters, a bytes object is a sequence of bytes. There's also a type called bytearray that is a mutable counterpart of bytes. A "byte" is an integer from 0 to 255, but we will see them represented using ASCII characters and hexadecimal numbers. Let's create our first bytes object in the REPL:

>>> my_bytes = b"This is a byte"
>>> my_bytes
b'This is a byte'
The b prefix allows us to create a bytes object literal, but we can also use the bytes() constructor method:
>>> bytes(b"This is a byte")
b'This is a byte'
>>> bytes(10)
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
>>> bytes(range(10))
b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t'
What is \x00?

A two-character hexadecimal number encompasses all possible values for an individual byte (0-255). The bytes object will show its representation using two-digit hexadecimal values unless the value translates to an ASCII character or a shorter escape, like \x09 being shown as \t. A big difference between strings and bytes is what is returned when you index the item versus slicing. For a string, you always get a string returned, even for a single index. For bytes, indexing will return an integer; slicing will return a bytes object.
>>> my_bytes
b'This is a byte'
>>> my_bytes[0]
84
>>> my_bytes[0:2]
b'Th'
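Because bytes objects are immutable, trying to assign to an index fails. This short REPL example isn't from the lesson, but it helps show why the mutable bytearray type exists:
>>> my_bytes[0] = 84
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'bytes' object does not support item assignment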
Bytearrays

There isn't a literal syntax for creating a bytearray object; we need to use the bytearray constructor, which works just like the bytes constructor except that it can also take a bytes object as the argument to create a mutable version.
>>> bytearray()
bytearray(b'')
>>> bytearray(10)
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>> bytearray(range(10))
bytearray(b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t')
>>> bytearray(b'Bytes')
bytearray(b'Bytes')
Because bytearrays are mutable, we can use the same assignment operations that we would normally use with a list, except that we need to remember that when working with an index, we need to pass in an integer value, and when replacing a slice, we need to pass in a bytes object.
>>> b_array = bytearray(10)
>>> b_array[0] = b'T'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'bytes' object cannot be interpreted as an integer
>>> b_array[0:1] = b'T'
>>> b_array
bytearray(b'T\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>> b_array[1] = 0x10
>>> b_array
bytearray(b'T\x10\x00\x00\x00\x00\x00\x00\x00\x00')
Using bytes Mode for Files

To use bytes mode when working with a file, we'll need to specify whether we're going to read, write, or append and then also add the b mode. Let's open our xmen.txt file in bytes mode:
>>> with open('xmen.txt', 'rb') as my_file:
...     my_file.read()
...
b'Beast\nPhoenix\nCyclops\nBishop\nNightcrawler\n'
>>> with open('xmen.txt', 'rb') as my_file:
...     my_file.readlines()
...
[b'Beast\n', b'Phoenix\n', b'Cyclops\n', b'Bishop\n', b'Nightcrawler\n']
The main difference between text mode and bytes mode is that we need to use bytes objects when writing, and we'll receive bytes objects when reading.

Reading into Bytearrays

Because bytearrays are mutable, we can create them with a specific size and then read that exact amount of information from a file to place into the bytearray.
>>> my_file = open('xmen.txt', 'rb')
>>> b_array = bytearray(10)
>>> my_file.readinto(b_array)
10
>>> b_array
bytearray(b'Beast\nPhoe')
An alternative way to create a bytearray with a specific length and content would be to call read with the length argument that we hadn't used yet and then pass that value into the bytearray constructor:
>>> new_b_array = bytearray(my_file.read(10))
>>> new_b_array
bytearray(b'nix\nCyclop')
This isn't something that is used all that often, but there's a chance that you might see questions about reading into bytearray objects on the PCAP exam.
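The lesson only reads in bytes mode, but writing works the same way: we hand write a bytes object and get the number of bytes written back. Here's a minimal sketch (the gambit.txt file name is just a placeholder for this example):
>>> with open('gambit.txt', 'wb') as my_file:
...     my_file.write(b'Gambit\n')
...
7
>>> with open('gambit.txt', 'rb') as my_file:
...     my_file.read()
...
b'Gambit\n'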


Exceptions and Exception Handling

What are Exceptions?

00:04:40

Lesson Description:

Things don't always go according to plan when we're programming, and when issues arise, we run into exceptions. In this lesson, we'll learn about what exceptions are and when we'll run into them.

Documentation For This Video: Python Errors and Exceptions Documentation, Python Exceptions Documentation

Syntax Errors vs Exceptions

As we've been learning how to do various things with Python, we've run into both syntax errors and exceptions. There's a subtle difference between the two: syntax errors cannot be recovered from. The reason that we can't recover from a syntax error is that our code is simply not valid Python code, so the parser doesn't know what to do with anything after it runs into the error. Exceptions are issues that occur during the execution of syntactically valid code that prevent the code from executing as planned. An exception occurring doesn't necessarily mean that our program needs to fail; it might just mean that we need to do something differently. An example of an exception that we can handle is a TypeError. This is the type of exception we would run into if we tried to add an int and a str like this:

>>> 1 + 'a'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'
This might seem like something that we would never do, but say we had code that tried to add two different variables: without type checking or type casting, we can't know for certain that both variables hold onto types that can be added, so it is possible to run into a TypeError in that situation. As we write more and more complex code, there will be times when we need to handle errors, raise errors ourselves, and even create custom error types.
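Another everyday example of the same idea (my example, not one from the video): converting a string to a number is syntactically valid code, but it raises a ValueError at runtime if the string doesn't actually contain a number:
>>> int('42')
42
>>> int('forty-two')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'forty-two'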

Handling Exceptions with `try`, `except`, `else`, and `finally`

00:10:07

Lesson Description:

Not everything can go according to plan in our programs, but we should know when these scenarios arise and handle them appropriately. In this lesson, we'll take a look at how to handle exceptions in Python.

Documentation For This Video: The try statement & workflow, Python Errors and Exceptions Documentation, Python Exceptions Documentation

Handling Exceptions with try/except/else/finally

When we know that our code can raise an exception, we don't need to just accept it and let our program crash. We can handle these exceptions using the try statement. This is a compound statement, kind of like the if statement, where we will also need to use except, and we have access to else and finally. Let's break down what these do by writing a small program that will potentially raise an exception. We'll call this program using_try.py and put it in a new directory called exception_handling:

$ mkdir ~/exception_handling
$ touch ~/exception_handling/using_try.py
~/exception_handling/using_try.py
import sys

print(f"Received argument {sys.argv[1]}")
If we run this script with an extra argument, then it will run successfully, but if we run it without any arguments, then we'll see an exception.
$ python3.7 using_try.py testing
Received argument testing
$ python3.7 using_try.py
Traceback (most recent call last):
  File "using_try.py", line 3, in <module>
    print(f"Received argument {sys.argv[1]}")
IndexError: list index out of range
The exception is an IndexError, and it's being raised because the sys.argv list doesn't have 2 elements, so there is no index of 1. This is a completely reasonable thing for someone to do when using our script; it is easy to forget to pass in an argument. There are ways to handle this that don't involve exception handling, but since indexing a list can potentially raise an exception, we should be ready to perform exception handling in our code. To handle this, we need to place the code that could raise an exception within a try statement and then catch the exception if it happens so we can do something else.

~/exception_handling/using_try.py
import sys

try:
    print(f"Received argument {sys.argv[1]}")
except:
    print(f"Error: no arguments, please provide at least one argument")
    sys.exit(1)
This is the simplest kind of try/except, and this will catch any exception that might be raised by the code inside the try block. If we run this file again, we should see our print out.
$ python3.7 using_try.py
Error: no arguments, please provide at least one argument
$ echo $?
1
Since this is still a case where we want to terminate our script, we're going to explicitly exit with a non-zero status code to indicate to the user that things didn't go according to plan, but we're also able to shield the user from seeing the Python traceback and provide a better experience to anyone using our script. We could make this more specific and only catch a particular exception type, and even have multiple separate except blocks catching different kinds of exceptions. Let's introduce another potential exception and catch the exceptions separately:

~/exception_handling/using_try.py
import sys

try:
    print(f"First argument {sys.argv[1]}")
    args = sys.argv
    random.shuffle(args)
    print(f"Random argument {args[0]}")
except IndexError as err:
    print(f"Error: no arguments, please provide at least one argument ({err})")
    sys.exit(1)
except NameError:
    print(f"Error: random module not loaded")
    sys.exit(1)
Let's run this without an argument and then with an argument:
$ python3.7 using_try.py
Error: no arguments, please provide at least one argument (list index out of range)
$ python3.7 using_try.py testing
First argument testing
Error: random module not loaded
Notice that, like a conditional statement with multiple branches, only one of the except blocks will run, because as soon as an exception occurs, the execution of the try block stops. Additionally, we can assign the exception that is raised to a variable so that we can get more information from it by adding as <identifier> to our except clause.

The else and finally Statements

Now we're able to handle exceptions, but the exception handling workflow also gives us a way to run code if no exception gets caught, using else, and a way to run some code after any error handling (or the else block), using finally. Since we're calling sys.exit inside our except blocks, the example is cleaner if we remove those calls, so let's make some modifications to see how both of these work.

~/exception_handling/using_try.py
import sys
import random

try:
    print(f"First argument {sys.argv[1]}")
    args = sys.argv
    random.shuffle(args)
    print(f"Random argument {args[0]}")
except (IndexError, KeyError) as err:
    print(f"Error: no arguments, please provide at least one argument ({err})")
except NameError:
    print(f"Error: random module not loaded")
else:
    print("Else is running")
finally:
    print("Finally is running")
We did import random so that it is possible to successfully run the script. Let's give this a run to see how it goes:
$ python3.7 using_try.py
Error: no arguments, please provide at least one argument (list index out of range)
Finally is running
$ python3.7 using_try.py testing
First argument testing
Random argument using_try.py
Else is running
Finally is running
$
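As a small, hedged sketch of where else and finally tend to show up in practice (not part of the lesson's script), here they are combined with the file handling from earlier in the course; the roster.txt file name is just a placeholder:
try:
    # Opening a file can fail, e.g. if the file doesn't exist.
    my_file = open('roster.txt', 'r')
except OSError as err:
    print(f"Error: could not open file ({err})")
else:
    # Only runs when the open succeeded.
    print(my_file.read())
    my_file.close()
finally:
    # Runs whether or not the open succeeded.
    print("Done attempting to read the roster")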

Using Built-In Exceptions

00:05:10

Lesson Description:

When it comes to exceptions, most of the time we'll be using exception handling, but sometimes we want to raise an exception from our code and expect the code that uses ours to handle the exceptions we might potentially raise.

Documentation For This Video: Python Errors and Exceptions Documentation, Python Exceptions Documentation

Creating an Exception

To trigger an exception in Python code we need to utilize another keyword: raise. We refer to this as "raising an exception". Before we can raise an exception though, we need to create one, but thankfully exceptions are objects just like everything else. Hopping into the REPL, let's create our first exception:

$ python3.7
>>> err = Exception('something went wrong')
>>> err
Exception('something went wrong')
>>> str(err)
'something went wrong'
>>> dir(err)
['__cause__', '__class__', '__context__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__', '__suppress_context__', '__traceback__', 'args', 'with_traceback']
Notice that just creating an exception doesn't stop execution. The Exception class is the parent class for most exceptions, and we can see these by using Exception.__subclasses__():
>>> Exception.__subclasses__()
[<class 'TypeError'>, <class 'StopAsyncIteration'>, <class 'StopIteration'>, <class 'ImportError'>, <class 'OSError'>, <class 'EOFError'>, <class 'RuntimeError'>, <class 'NameError'>, <class 'AttributeError'>, <class 'SyntaxError'>, <class 'LookupError'>, <class 'ValueError'>, <class 'AssertionError'>, <class 'ArithmeticError'>, <class 'SystemError'>, <class 'ReferenceError'>, <class 'MemoryError'>, <class 'BufferError'>, <class 'Warning'>, <class 'locale.Error'>, <class 're.error'>, <class 'sre_parse.Verbose'>]
Let's dig a little deeper into the inheritance structure here:
>>> Exception.__bases__
(<class 'BaseException'>,)
>>> BaseException.__bases__
(<class 'object'>,)
>>> BaseException.__subclasses__()
[<class 'Exception'>, <class 'GeneratorExit'>, <class 'SystemExit'>, <class 'KeyboardInterrupt'>]
Since BaseException only inherits from object, it is essentially the parent of all exceptions, but we really won't ever use it directly.

Raising an Exception

Now that we know how to create an exception, let's go ahead and create and raise an exception from a new script called using_exceptions.py:

~/exception_handling/using_exceptions.py
import sys

if len(sys.argv) < 2:
    raise Exception('not enough arguments')

name = sys.argv[1]
print(f"Name is {name}")
Now, if we run into the situation where not enough arguments are provided when the script is run, we can create and raise an exception with the message that we want. This puts us back to a less-than-great user experience for our script, but it's great for showcasing how to use exceptions. Let's put this to the test:
$ python3.7 using_exceptions.py
Traceback (most recent call last):
  File "using_exceptions.py", line 4, in <module>
    raise Exception("not enough arguments")
Exception: not enough arguments
$ python3.7 using_exceptions.py Keith
Name is Keith
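Since the subclasses listed above are regular classes, we can also raise something more specific than the generic Exception. This variation isn't shown in the video, but a missing argument arguably fits the built-in ValueError, and the pattern is exactly the same:
import sys

if len(sys.argv) < 2:
    # Raise a more specific built-in exception type with our own message.
    raise ValueError('not enough arguments')

name = sys.argv[1]
print(f"Name is {name}")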

Creating Custom Exception Types

00:04:30

Lesson Description:

Specific error types make it easier to tailor exception handling to different potential error cases, and sometimes we want to build something that could benefit from a more detailed error type that doesn't exist yet. In this lesson, we'll learn how to create custom exception types.

Documentation For This Video: Python Errors and Exceptions Tutorial, Python Exceptions Documentation

Creating a Custom Exception Type

Custom exception types are things that we'll use more often when we're building larger libraries that have different types of contextual errors. These custom exceptions can be caught by the code using our modules so that it only catches our exception and not more generic exceptions. We won't be creating a large library to showcase custom exceptions, but we can create a simple package that has an errors module, and then we can raise those errors in the code provided by our package. To begin, let's create a package called cli:

$ mkdir cli
$ touch cli/__init__.py
$ touch cli/errors.py
To follow what we've been doing around exceptions related to script arguments, let's create an ArgumentError class in our errors module: ~/exception_handling/cli/errors.py
class ArgumentError(Exception):
    pass
That's it! Now we have an identifier for our custom exception scenario; all exceptions need to do the same things, and the Exception class already defines all of that, so our subclass doesn't need a body beyond pass. Let's create a main function in our package's __init__.py, and then we can run that from a new script. ~/exception_handling/cli/__init__.py
import sys

from .errors import ArgumentError

def main():
    if len(sys.argv) < 2:
        raise ArgumentError('too few arguments')
    print(f"Name is {sys.argv[1]}")
Now from using_exceptions.py let's import our main function and use exception handling to catch our new ArgumentError: ~/exception_handling/using_exceptions.py
import sys

from cli import main
from cli.errors import ArgumentError

try:
    main()
except ArgumentError as err:
    print(f"Error: {err}")
    sys.exit(1)

Finally, let's run using_exceptions.py a few more times:
$ python3.7 using_exceptions.py
Error: too few arguments
$ python3.7 using_exceptions.py Keith
Name is Keith
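As a hedged extension that goes a bit beyond the lesson: because ArgumentError is an ordinary class, it can also carry extra context, such as how many arguments were expected, by defining its own __init__ (the expected_args attribute here is just an illustrative name):
class ArgumentError(Exception):
    def __init__(self, message, expected_args=None):
        # Keep the normal Exception behavior for the message.
        super().__init__(message)
        # Stash extra context that handlers can inspect.
        self.expected_args = expected_args

# Example usage:
# raise ArgumentError('too few arguments', expected_args=2)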

Using Assertions

00:04:15

Lesson Description:

When we've raised exceptions to this point, we've done so because some criterion wasn't met. When developing and debugging code, this is something that can be done using the built-in assert keyword, and in this lesson, we'll give that a try.

Documentation For This Video: Python Errors and Exceptions Tutorial, Python Exceptions Documentation, The assert statement, The -O Python Flag

What is an Assertion?

Assertions are statements that raise an AssertionError if the passed-in expression evaluates to a False value (or something that would convert to False via the bool constructor). This is what we've been doing when we use code like this:

if len(sys.argv) < 2:
    raise ArgumentError('too few arguments')
We can achieve this same thing using an assert statement, except it will raise an AssertionError instead of our custom ArgumentError. Here's what this would look like in our main function in cli/__init__.py:

~/exception_handling/cli/__init__.py
import sys

from .errors import ArgumentError

def main():
    # if len(sys.argv) < 2:
    #     raise ArgumentError("too few arguments")
    assert len(sys.argv) >= 2, "too few arguments"
    print(f"Name is {sys.argv[1]}")
We will need to change our using_exceptions.py file to catch the AssertionError type so it doesn't break our program:

~/exception_handling/using_exceptions.py
import sys

from cli import main
from cli.errors import ArgumentError

try:
    main()
except (ArgumentError, AssertionError) as err:
    print(f"Error: {err}")
    sys.exit(1)
Let's run our script once again:
$ python3.7 using_exceptions.py
Error: too few arguments
$ python3.7 using_exceptions.py Keith
Name is Keith

Assertion is a Debugging Tool

This isn't a great use of assert because assertions are a tool specifically designed for developing and debugging code, and the Python interpreter can remove assert statements when our script is run with the -O or -OO flags (a capital "O", for "optimize").
$ python3.7 using_exceptions.py
Error: too few arguments
$ python3.7 -O using_exceptions.py
Traceback (most recent call last):
  File "using_exceptions.py", line 4, in <module>
    main()
  File "/home/cloud_user/exception_handling/cli/__init__.py", line 10, in main
    print(f"Name is {sys.argv[1]}")
IndexError: list index out of range
When we use the -O or -OO flags, we optimize our code to remove assertions (and docstrings with -OO), so we can't reliably use asserts to determine if we're going to raise errors.
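For contrast, here's a sketch (my example, not the course's) of the more typical debugging-style use of assert: checking an internal invariant while developing, knowing the check disappears under -O:
def average(numbers):
    # Debugging check: we never expect to be handed an empty list.
    assert len(numbers) > 0, "average() requires a non-empty list"
    return sum(numbers) / len(numbers)

print(average([1, 2, 3]))  # 2.0
print(average([]))         # AssertionError normally; ZeroDivisionError under -O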


Course Conclusion

Final Steps

How to Prepare for the Exam

00:02:12

Lesson Description:

To feel fully prepared to take the Certified Associate in Python Programming Certification exam, do the following:

Take and pass the practice exam multiple times
Take a look at the study guide
Do all of the hands-on labs in this course

The majority of the exam involves determining what a snippet of code does. The code is intentionally written to have uninformative or confusing variable and function names, so be patient reading through the code in each question. For the exam, you'll need to answer 40 questions, and you have 65 minutes to do so.

Registering for the Exam

To register for the exam, go here to sign in and schedule an exam with Pearson VUE. Once signed in, click "View Exams" and search for "PCAP". From here, you'll need to pick a testing center, schedule when you'll take your exam, and pay. Good luck!

What's Next After Certification?

00:01:29

Lesson Description:

Thanks for taking the time to go through this course! I hope you learned a lot, and I want to hear about it. Please take a moment to rate the course. It'll help with determining what works and what doesn't. Be sure to share your results in the community. Everyone wants to celebrate your successes with you.

Practice Exam

Certified Associate in Python Programming Certification

00:45:00
