What is concurrency in general?
Concurrency is a way to significantly improve the efficiency of your Python applications. It enables multiple tasks to make progress at the same time, making better use of the system's resources. Python offers various techniques and libraries to achieve concurrency, including threading, multiprocessing, and asynchronous programming.
Types of Concurrency in Python
It’s important to note that there are several forms of concurrency. Concurrency in computing can be achieved through various techniques, each serving different purposes and scenarios:
- Thread-Based Concurrency: This approach involves using threads, which are lightweight units of execution within a process. Threads share the same memory space and can execute tasks concurrently, making them suitable for I/O-bound operations. However, managing shared state between threads can lead to issues like race conditions and deadlocks. This is our focus in this network automation blog.
- Process-Based Concurrency: Processes are independent instances of a program, each with its own memory space. Process-based concurrency involves executing multiple processes concurrently, which can utilize multiple CPU cores effectively. Inter-process communication (IPC) mechanisms are used to facilitate communication and data exchange between processes.
- Asynchronous Programming: Asynchronous programming allows tasks to execute concurrently without relying on threads or processes. It’s commonly used in event-driven and non-blocking I/O applications, where tasks can be initiated and completed independently. Asynchronous programming in Python is typically implemented using async/await syntax and asyncio library.
- Parallelism: Parallelism involves executing multiple tasks simultaneously to improve performance. Unlike concurrency, which is about structuring a program so multiple tasks can make progress, parallelism is about literally running tasks at the same time on multiple cores to reduce overall execution time. Parallelism can be achieved using techniques like multiprocessing, where multiple processes run in parallel, or distributed computing frameworks like Apache Spark and MPI (Message Passing Interface).
Each type of concurrency has its advantages and trade-offs, and the choice depends on factors such as the nature of the tasks, system resources, and programming requirements.
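As a quick illustration of process-based concurrency, the standard library's `concurrent.futures` module offers `ProcessPoolExecutor`, which spreads work across separate processes (and therefore CPU cores). A minimal sketch:

```python
from concurrent.futures import ProcessPoolExecutor

def square(n):
    # CPU-bound work runs in a separate process, bypassing the GIL
    return n * n

if __name__ == "__main__":
    # Each call to square() may run on a different CPU core;
    # map() returns results in the order the inputs were given
    with ProcessPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(square, range(4)))
    print(results)  # [0, 1, 4, 9]
```

The `if __name__ == "__main__":` guard is required because child processes may re-import the module on some platforms.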
Concurrency via Threads in Python
Threading involves creating lightweight units of execution, often referred to as “worker threads,” within a single process. Since these threads share the same memory space, they are particularly well-suited for tasks focused on input/output operations.
`ThreadPoolExecutor` is a class in Python’s `concurrent.futures` module that provides a high-level interface for managing a pool of worker threads. It allows you to execute callables (functions or methods) asynchronously across multiple threads, making it useful for parallelizing tasks that can be performed concurrently. The purpose of this post is to present Python concurrency within the context of network automation.
While there are plenty of good blog posts that tackle this subject, they rarely do so within the context of the network automation field.
`ThreadPoolExecutor`
1. Thread Pool Management: `ThreadPoolExecutor` manages a pool of worker threads, allowing you to execute multiple tasks concurrently without having to manually manage threads.
2. Simple Interface: `ThreadPoolExecutor` provides a simple interface for submitting tasks to the thread pool using the `submit()` method. You can submit callables along with their arguments, and `ThreadPoolExecutor` takes care of executing them across the available worker threads.
3. Asynchronous Execution: Tasks submitted to a `ThreadPoolExecutor` are executed asynchronously, meaning that they can run concurrently with other tasks without blocking the main thread of execution. This allows for efficient utilization of system resources and improved performance.
4. Concurrency Control: `ThreadPoolExecutor` provides control over the maximum number of worker threads (`max_workers` parameter) and allows you to adjust the level of concurrency based on your specific requirements.
5. Task Futures: When you submit a task to a `ThreadPoolExecutor`, it returns a future object representing the result of the task. You can use this future object to monitor the status of the task, retrieve its result, or handle any exceptions that occurred during its execution.
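The pieces above — the pool, `submit()`, asynchronous execution, `max_workers`, and futures — fit together in a minimal, library-free sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    # Stand-in for an I/O-bound task such as a network request
    return f"result from {name}"

# The with-block shuts the pool down cleanly once all tasks finish
with ThreadPoolExecutor(max_workers=3) as executor:
    # submit() schedules the callable and immediately returns a Future
    futures = [executor.submit(fetch, name) for name in ("r1", "r2", "r3")]
    # result() blocks until that task's return value is available
    results = [f.result() for f in futures]

print(results)  # ['result from r1', 'result from r2', 'result from r3']
```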
Overall, `ThreadPoolExecutor` simplifies the process of parallelizing tasks in Python by providing a high-level interface for managing a pool of worker threads. It is particularly useful for I/O-bound tasks, such as network requests or file I/O operations, where multiple tasks can be performed concurrently without CPU-bound computation.
Concurrency: Asynchronous vs ThreadPool
Let’s do a quick comparison between two types of concurrency commonly utilised in Python.
ThreadPoolExecutor and asynchronous programming (often using async and await in Python) are both mechanisms for concurrency, but they differ in how they achieve concurrency and how they handle blocking operations.
ThreadPoolExecutor
• ThreadPoolExecutor is part of the concurrent.futures module in Python and provides a high-level interface for asynchronously executing functions in separate threads.
• It creates a pool of worker threads and submits tasks to this pool. Each task is executed independently in its own thread, allowing multiple tasks to run concurrently.
• ThreadPoolExecutor is suitable for I/O-bound tasks, such as network requests or file I/O operations, where the threads spend most of their time waiting for external resources.
• However, it does not achieve true parallelism for CPU-bound Python code: in CPython, threads are limited by the Global Interpreter Lock (GIL), so only one thread executes Python bytecode at a time. For I/O-bound work this matters little, because the GIL is released while a thread waits on I/O.
Asynchronous Programming
• Asynchronous programming, often using async and await keywords in Python, allows you to write non-blocking, concurrent code that can handle multiple operations simultaneously without using threads.
• Asynchronous programming relies on cooperative multitasking, where tasks voluntarily give up control to allow other tasks to run. This is achieved through the use of coroutines, which are functions that can pause and resume execution.
• Asynchronous programming is best suited to I/O-bound tasks; CPU-bound work still blocks the single-threaded event loop unless it is offloaded to a thread or process pool. It excels in scenarios where many I/O operations are involved, as it allows the program to perform other tasks while waiting for I/O operations to complete.
• It can be more efficient than using threads because it avoids the overhead of thread creation and context switching.
In summary, ThreadPoolExecutor is a mechanism for running tasks concurrently using threads, while asynchronous programming is a programming paradigm that allows for non-blocking, concurrent code execution without using threads.
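For contrast, here is a minimal asyncio sketch of the same idea — `await asyncio.sleep()` stands in for a non-blocking I/O wait:

```python
import asyncio

async def fetch(name):
    # While one coroutine awaits, the event loop runs the others
    await asyncio.sleep(0.1)  # stand-in for non-blocking I/O
    return f"result from {name}"

async def main():
    # gather() runs the coroutines concurrently on a single thread
    # and returns their results in the order they were passed in
    return await asyncio.gather(fetch("r1"), fetch("r2"), fetch("r3"))

results = asyncio.run(main())
print(results)  # ['result from r1', 'result from r2', 'result from r3']
```

All three "fetches" overlap on one thread, so the total wait is roughly 0.1 s rather than 0.3 s.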
Threading Concurrency in Network Automation
Here’s a simple example that is relevant to network automation. This demonstrates how to use `ThreadPoolExecutor` with Netmiko to connect to multiple devices concurrently:

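A minimal sketch of the example walked through below — Netmiko (`pip install netmiko`) is assumed, and the device IPs and credentials are placeholders you would replace with your own:

```python
from concurrent.futures import ThreadPoolExecutor
from netmiko import ConnectHandler

# Placeholder connection details - replace with your own devices
devices = [
    {
        "device_type": "cisco_ios",
        "host": "192.0.2.1",
        "username": "admin",
        "password": "password",
    },
    {
        "device_type": "cisco_ios",
        "host": "192.0.2.2",
        "username": "admin",
        "password": "password",
    },
]

def execute_command(device):
    # Each worker thread opens its own SSH session
    with ConnectHandler(**device) as conn:
        return conn.send_command("show version")

# At most two devices are handled concurrently
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(execute_command, device) for device in devices]
    for future in futures:
        # result() blocks until that device's output is available
        print(future.result())
```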
The pool is capped at two worker threads.
In this example:
1. We define a list of `devices` with their connection information.
2. We define a function `execute_command(device)` that establishes a connection to the device using Netmiko and executes the command `'show version'`.
3. We create a `ThreadPoolExecutor` with a maximum of 2 worker threads. This means that a maximum of 2 devices will be connected to concurrently. Each device will be managed by a dedicated worker thread.
4. We submit each device to the executor using `executor.submit()`. This schedules the execution of the `execute_command()` function for each device and returns a future object representing the result of the execution.
5. We wait for all futures to complete by iterating over them and calling `future.result()`. This blocks until the result of each future is available.

Using `ThreadPoolExecutor` allows us to connect to multiple devices concurrently, which can significantly reduce the total execution time when dealing with a large number of devices. However, it’s important to be mindful of the maximum number of worker threads to avoid overwhelming the network or the devices themselves. Adjust the `max_workers` parameter based on your specific requirements and the capabilities of your network and devices.
`ThreadPoolExecutor` not only allows us to establish connections to different devices concurrently but also enables us to execute commands on those devices concurrently.
In the example provided, we define a function `execute_command(device)` that establishes a connection to a device using Netmiko and executes the command `'show version'`. This function is then submitted to the `ThreadPoolExecutor` for each device in the `devices` list.
By using a thread pool, the `ThreadPoolExecutor` manages the execution of these functions across multiple worker threads. Each worker thread is responsible for executing the `execute_command()` function for a specific device.
Since we have specified `max_workers=2`, the `ThreadPoolExecutor` will ensure that a maximum of 2 worker threads are active at any given time. As a result, the `'show version'` command will be executed concurrently on the different devices, with up to 2 devices being processed simultaneously.
Concurrent execution of commands can significantly reduce the overall execution time, especially when dealing with a large number of devices or when the commands have long response times. It allows us to make efficient use of available resources and improves the overall performance of our network automation tasks.
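The effect of `max_workers` on wall-clock time can be simulated without any network gear — here `time.sleep()` stands in for a slow `'show version'` response, and the hostnames and output string are made up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_show_version(host):
    time.sleep(0.2)  # simulate a slow device response
    return f"{host}: version output"

hosts = ["r1", "r2", "r3", "r4"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as executor:
    results = list(executor.map(fake_show_version, hosts))
elapsed = time.perf_counter() - start

# Four 0.2 s tasks on two workers finish in roughly two batches (~0.4 s),
# rather than the ~0.8 s that sequential execution would take
print(f"{elapsed:.2f}s", results)
```

Doubling `max_workers` to 4 would collapse this to a single ~0.2 s batch — up to the point where the devices or the network become the bottleneck.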