Customizing pyATS – AeTest – Part 1

I’ve previously been through the use case for AeTest and how it can take multiple testing and verification scripts, run them as a batch, and provide a really cool HTML report at the end.

Now perhaps we want to look at how we can put our own stamp on things. Let’s start with the job file itself; this is an example of what you might have:

"""
network_test_job.py

"""

import os
from pyats.easypy import run

# compute the script path from this location
SCRIPT_PATH = os.path.dirname(__file__)


def main(runtime):
    """job file entrypoint"""
    # run script
    run(
        testscript=os.path.join(SCRIPT_PATH, "admin_show_platform.py"),
        runtime=runtime,
        taskid="Checking the status of the line cards",
    )

    run(
        testscript=os.path.join(SCRIPT_PATH, "show_memory_summary.py"),
        runtime=runtime,
        taskid="Checking the status of the memory of the platform",
    )

    run(
        testscript=os.path.join(SCRIPT_PATH, "ping_test.py"),
        runtime=runtime,
        taskid="Checking the connectivity to all the nodes",
    )

We have three AeTest scripts here, each manually specified in its own run() call. This is fine, but what if we wanted to change the order of those scripts so that the ping test ran before admin show platform, and so on? Maybe we have twenty-five plus scripts to run and the ordering may be different on each run. To achieve that we would need to manually edit this file every time, but what if the person running the script is not a coder? There is also a risk in people repeatedly going into this file and changing things on the fly. Perhaps we want to change the ping destination IPs and VRFs used in the ping_test script as well? All of these requirements are custom.

My way of handling the custom aspects is through a YAML file. As you probably know, YAML is a structured data format which is very human readable. What makes it so appealing is that the data contained within the YAML ports easily into a Python dictionary. Once in a dictionary, we can call that information from our scripts instead of hard coding it, leaving all the complexity in the YAML file.

Let’s start to build a YAML file with some data; I’ve called it customize.yaml:

Jobs:

  Script1:
    index: 1
    name: show_processes_cpu.py
    taskid: Check the CPU Utilisation is not greater than 75 percent

  Script2:
    index: 2
    name: show_bfd_sessions.py
    taskid: Check the BFD neighbors are Operational

  Script3:
    index: 3
    name: admin_show_platform.py
    taskid: Check the line cards are Operational

  Script4:
    index: 4
    name: ping_test.py
    taskid: Run a PING Test

Ping_Dest:

  dest1:
    index: 1
    vrf: default
    destination: 192.168.1.254
    count: 10

  dest2:
    index: 2
    vrf: default
    destination: 192.168.1.212
    count: 5

  dest3:
    index: 3
    vrf: default
    destination: 192.168.1.214
    count: 50

You can probably see that I have placed the list of pyATS jobs into this yaml file, along with a list of data for doing PINGs. I’ve put an index value in there because I want a unique reference for future use, although I am not using it at the moment. You can imagine that I can now customize everything in here: the number of scripts that I run, more PING destinations, the VRF and the PING count. I could also add other parameters, such as a PING source interface or IP address, as another field if I wanted to.
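For instance, a hypothetical fourth destination carrying an extra source field might look like the snippet below (the source key and its value are my own illustration; the ping script would need to be taught to read it before it has any effect):

  dest4:
    index: 4
    vrf: default
    destination: 192.168.1.1    # hypothetical extra destination
    count: 5
    source: Loopback0           # hypothetical PING source interface field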

The beauty of this is having a data repository which can be easily edited or modified, and which ports straight into a dictionary. The script below does just that:

import yaml

# parse the yaml file straight into a python dictionary
with open('./customize.yaml') as file:
    parameters_list = yaml.load(file, Loader=yaml.FullLoader)

job_dict = parameters_list['Jobs']
basic_ping_dict = parameters_list['Ping_Dest']

# Python debugger to see the data
import ipdb
ipdb.set_trace()

We import the yaml module and then open our customize.yaml file. The yaml.load call, with Loader=yaml.FullLoader, parses the file straight into a Python dictionary, which lands in a variable called parameters_list:

ipdb> type(parameters_list)
<class 'dict'>

ipdb> parameters_list
{'Jobs': {'Script1': {'index': 1, 'name': 'show_processes_cpu.py', 'taskid': 'Check the CPU Utilisation is not greater than 75 percent'}, 'Script2': {'index': 2, 'name': 'show_bfd_sessions.py', 'taskid': 'Check the BFD neighbors are Operational'}, 'Script3': {'index': 3, 'name': 'admin_show_platform.py', 'taskid': 'Check the line cards are Operational'}, 'Script4': {'index': 4, 'name': 'ping_test.py', 'taskid': 'Run a PING Test'}}, 'Ping_Dest': {'dest1': {'index': 1, 'vrf': 'default', 'destination': '192.168.1.254', 'count': 10}, 'dest2': {'index': 2, 'vrf': 'default', 'destination': '192.168.1.212', 'count': 5}, 'dest3': {'index': 3, 'vrf': 'default', 'destination': '192.168.1.214', 'count': 50}}}
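As a quick aside, since our file only contains plain scalars and mappings, yaml.safe_load would be a drop-in alternative here; it parses the same data but refuses to construct arbitrary Python objects:

import yaml

# safe_load only builds basic Python types (dict, list, str, int, ...)
with open('./customize.yaml') as file:
    parameters_list = yaml.safe_load(file)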

I think it’s so cool that yaml maps straight into a dictionary like that. Hopefully you’re getting the idea: we can put almost anything we want to change in here and then just reference back to it from our scripts. In this case I have created two new variables (job_dict and basic_ping_dict) as the containers for this data.
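Individual values are then just nested key lookups; continuing in the debugger, for example:

ipdb> job_dict['Script1']['name']
'show_processes_cpu.py'

ipdb> basic_ping_dict['dest2']['destination']
'192.168.1.212'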

Now we’ve completed stage one: we have custom data, but I now need to make use of it. There are several ways we can do this, but my preference is to create a simple module which we can import into scripts so that these dictionaries can be called. I could just have a set of functions, or even something along the lines of my initial yaml-to-dict script. However, I’m trying to create something which would be easy to add new features to later without re-working it (at least, that is the principle).

import yaml

class CustomBits:
    """container for the customization data held in customize.yaml"""

    def __init__(self):
        self.job_dict = {}
        self.basic_ping_dict = {}

        # assumes customize.yaml sits alongside this module
        with open('./customize.yaml') as file:
            self.parameters_list = yaml.load(file, Loader=yaml.FullLoader)

    def joblist(self):
        """return the dictionary of pyATS jobs"""
        self.job_dict = self.parameters_list['Jobs']
        return self.job_dict

    def ping_dest(self):
        """return the dictionary of ping destinations"""
        self.basic_ping_dict = self.parameters_list['Ping_Dest']
        return self.basic_ping_dict

I’ve created a class with two methods which return the job list and the ping parameters. I’ve also placed a constraint that the module and customize.yaml live in the same directory, but this is easily changed and the path could even be passed in as an argument if you wanted (see the sketch below). The class is called CustomBits and the file is custombits.py.
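Here is a minimal sketch of that argument variant; the yaml_path parameter name is my own choice and the default keeps the original behaviour:

import yaml

class CustomBits:

    def __init__(self, yaml_path='./customize.yaml'):
        # pass a different path if customize.yaml lives elsewhere
        with open(yaml_path) as file:
            self.parameters_list = yaml.load(file, Loader=yaml.FullLoader)

With the module in place, we are now going to try it out within our job file first. This is the updated script, with a walkthrough below: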

"""
network_test_job.py

Example multi-testscript job file

"""

import os
from pyats.easypy import run
from custombits import CustomBits

# compute the script path from this location
SCRIPT_PATH = os.path.dirname(__file__)

def main(runtime):
    """job file entrypoint"""

    # pull the job list from the yaml file, then run each script in order
    job_dict = CustomBits().joblist()
    for script in job_dict:
        name = job_dict[script]['name']
        taskid = job_dict[script]['taskid']
        run(
            testscript=os.path.join(SCRIPT_PATH, name),
            runtime=runtime,
            taskid=taskid,
        )

First we need to import our new module:

from custombits import CustomBits

We instantiate the class and call the function named joblist, mapping the result into a new variable called job_dict which contains our list of jobs. Since we might have 1, 10 or even 50 scripts in our yaml file, we need a for loop to iterate over each one in the job file:

for script in job_dict:
    name = job_dict[script]['name']
    taskid = job_dict[script]['taskid']

We then just map the name and taskid values into each run() call, which queues up all the jobs we’re running in AeTest while keeping the ordering aligned with the data that we provided.
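One caveat here: the for loop relies on Python 3.7+ dictionaries (and PyYAML) preserving the insertion order of the yaml file. If you’d rather make the ordering explicit, the index field we stored earlier can drive it; a small sketch, assuming the same job_dict structure:

    # run the scripts in ascending order of their yaml index value
    for script in sorted(job_dict, key=lambda s: job_dict[s]['index']):
        name = job_dict[script]['name']
        taskid = job_dict[script]['taskid']
        run(
            testscript=os.path.join(SCRIPT_PATH, name),
            runtime=runtime,
            taskid=taskid,
        )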

In summary, we now have a yaml file and a small module which we can customize in any way, in terms of which scripts we run under AeTest and the ordering of those scripts. By sticking with a standard job file, we’ll never need to change the job file itself, as all the heavy lifting and customization is in the yaml file instead. In the next part, we’ll look at the PING script and how it is customized in the same way, using the same YAML datastore.
