Depending on your solution, running a single-threaded PyRFC server (like the one we set up in part 1 of this series) may not meet your data needs. In our case, we needed the ability to process multiple requests simultaneously. In part three of this series, we will discuss how to scale your PyRFC server horizontally.

PyRFC Difficulties

The PyRFC module isn’t capable of handling multiple calls simultaneously out of the box. To fix this, our first thought was to push all incoming calls to our server onto a queue and have multiple workers pull from it. However, after digging further into the module and reviewing the _pyrfc.pyx file, we noticed that the RfcListenAndDispatch function blocks the main thread until a response is returned.
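For reference, the queue-and-workers pattern we had in mind can be sketched in pure Python. The sketch works fine for ordinary callables (handle_call below is a stand-in for a registered RFC handler, not part of PyRFC); it is RfcListenAndDispatch blocking the main thread that keeps this design from helping here:

```python
import queue
import threading

def handle_call(payload):
    # Stand-in for a registered RFC handler; an assumption for illustration.
    return payload * 2

def worker(inbox, results):
    # Each worker pulls calls off the shared queue and processes them.
    while True:
        item = inbox.get()
        if item is None:  # sentinel: shut this worker down
            inbox.task_done()
            break
        results.append(handle_call(item))
        inbox.task_done()

inbox = queue.Queue()
results = []

# Spin up a small worker pool.
workers = [threading.Thread(target=worker, args=(inbox, results)) for _ in range(4)]
for t in workers:
    t.start()

# Enqueue incoming "calls", then signal shutdown with one sentinel per worker.
for call in range(10):
    inbox.put(call)
for _ in workers:
    inbox.put(None)

inbox.join()
for t in workers:
    t.join()

print(sorted(results))  # prints [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```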

The RfcListenAndDispatch function is triggered when serve is called on our server module, and it dispatches incoming calls to the functions registered on the server. When we tried to modify the underlying Cython, we had difficulty architecting a design that would let our server keep listening for incoming requests while processing an existing one.

Our PyRFC Scaling Solution

In order to handle multiple requests from SAP, we identified a simpler solution: spinning up multiple Docker containers of our existing PyRFC server. By doing so, we were able to create multiple connections at the NW RFC Gateway that incoming requests could use.
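As a sketch, assuming the server from part 1 has been packaged into an image, the replication can be expressed with Docker Compose. The image name, service name, and environment variable names below are placeholders, not the actual configuration from our project:

```yaml
# docker-compose.yml (sketch; names and variables are placeholders)
services:
  pyrfc-server:
    image: our-pyrfc-server:latest
    environment:
      - GATEWAY_HOST=sapgw.example.com   # placeholder: your NW RFC Gateway host
      - PROGRAM_ID=OUR_PROGRAM_ID        # every replica registers the same program ID
```

Running `docker compose up --scale pyrfc-server=4` then starts four containers, each registering its own connection at the gateway under the shared program ID.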

Why a Better Approach is Needed

Having multiple servers register at the gateway with the same program ID allows us to handle as many requests as there are connections. While this solves our initial scaling problem, there are some drawbacks that need to be addressed.

  1. Depending on which systems call our server and when, this approach may fall short because our server has no way of knowing how many connections are available, and if all of the connections are in use, incoming requests will be dropped. Either the calling system must contain logic to track the status of the RFC connections, or our server needs extra logic that uses PyRFC to check the status of calls and connections at the gateway (GWY_READ_CONNECTIONS2). Neither is ideal.
  2. The connections established at startup persist even when unused, unless a connection error occurs. To keep idle connections from exhausting resources, extra logic would need to be built into the server to scale the application up and down automatically. Container management services aren’t applicable here because the information needed to drive scaling decisions lives at the gateway, not at the container level. Since each container is essentially one connection, CPU and memory metrics aren’t useful indicators for deciding whether to scale the number of running containers up or down.
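The gateway check from drawback 1 could be sketched with a PyRFC client connection. This is a sketch under assumptions: it requires the SAP NW RFC SDK and valid logon parameters, and the shape of what GWY_READ_CONNECTIONS2 returns is not confirmed here, so check the function module's interface (e.g. in SE37) before relying on any table or field names:

```python
def read_gateway_connections(conn_params):
    """Sketch: call GWY_READ_CONNECTIONS2 over a client connection and return
    the raw result. conn_params is a dict of logon parameters (ashost, sysnr,
    client, user, passwd, ...). The structure of the result -- e.g. which
    table holds the connection entries -- is an assumption to verify in SE37."""
    from pyrfc import Connection  # deferred import: needs the NW RFC SDK installed
    conn = Connection(**conn_params)
    try:
        return conn.call("GWY_READ_CONNECTIONS2")
    finally:
        conn.close()
```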

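If we did build the scaling logic ourselves, the decision would key off gateway connection usage rather than CPU or memory. A minimal pure-Python sketch of such a policy follows; the 80%/20% thresholds and the replica bounds are arbitrary assumptions for illustration, and the busy/total counts would have to come from the gateway check described above:

```python
def desired_replicas(busy, total, min_replicas=1, max_replicas=8):
    """Pick a replica (connection) count from gateway connection usage.
    Scale up when most connections are busy, down when most sit idle.
    The 0.8 / 0.2 thresholds are arbitrary assumptions for illustration."""
    if total == 0:
        return min_replicas
    utilization = busy / total
    if utilization >= 0.8:    # nearly saturated: add a connection
        target = total + 1
    elif utilization <= 0.2:  # mostly idle: drop a connection
        target = total - 1
    else:                     # within band: hold steady
        target = total
    return max(min_replicas, min(max_replicas, target))

print(desired_replicas(4, 4))  # 5: all connections busy, scale up
print(desired_replicas(0, 4))  # 3: all idle, scale down
print(desired_replicas(2, 4))  # 4: within band, hold steady
```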

When looking to horizontally scale your PyRFC application, spinning up multiple instances of your PyRFC server is the simplest way to do so. While it may be the simplest, depending on your design it may not be the cleanest. For this reason, our next post will cover a cleaner solution for scaling the PyRFC application we recently built.

Have questions? Contact us today.
