Bug #6025 (closed)

Python Saxon not releasing memory

Added by Mark Pierce over 1 year ago. Updated over 1 year ago.

Status: Closed
Priority: Normal
Category: Python
Start date: 2023-05-09
Due date:
% Done: 100%
Estimated time:
Applies to branch:
Fix Committed on Branch:
Fixed in Maintenance Release:
Found in version:
Fixed in version: 12.3
SaxonC Languages:
SaxonC Platforms:
SaxonC Architecture:

Description

I've recently built an AWS Lambda running Python 3.10 that takes a request, transforms it and returns the result.

This all seems fine and works quite well.

The issue is that with each invocation the memory usage and runtime grow, until the memory limit is hit and the Lambda instance is killed off.
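For context, the Lambda follows the usual pattern of initialising SaxonC once at cold start and reusing the processor and compiled stylesheet across warm invocations. Roughly the sketch below; the handler name and event shape are illustrative rather than the real code:

from saxoncpe import *

# Created once per container at cold start; warm invocations reuse these.
saxon_processor = PySaxonProcessor(license=True)
xslt_proc = saxon_processor.new_xslt30_processor()
executable = xslt_proc.compile_stylesheet(stylesheet_file="transforms/QuoteRequest-sample.xslt")

def lambda_handler(event, context):
    # Per request: parse the incoming payload and run the compiled transform.
    document = saxon_processor.parse_xml(xml_text=event["body"])
    result = executable.transform_to_string(xdm_node=document)
    return {"statusCode": 200, "body": result}

So only parse_xml and transform_to_string run per request, yet the memory still climbs on every invocation.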

I've knocked up a very simple program that I can run locally to demonstrate this behaviour.

import psutil
from saxoncpe import *

process = psutil.Process()  # current process, for sampling RSS
start_memory = process.memory_info().rss

with PySaxonProcessor(license=True) as saxon_processor:
    xslt_proc = saxon_processor.new_xslt30_processor()
    document = saxon_processor.parse_xml(xml_text="<request></request>")
    executable = xslt_proc.compile_stylesheet(stylesheet_file="transforms/QuoteRequest-sample.xslt")

    for i in range(1000):
        # Only the transform runs inside the loop; all setup happens once above.
        executable.transform_to_string(xdm_node=document)
        print(process.memory_info().rss)

print("Start memory:")
print(start_memory)
print("End memory:")
print(process.memory_info().rss)

If you comment out all of the Saxon code, the memory does not increase. Even with everything else hoisted out of the for loop so it runs only once, as in the listing above, leaving just the transform_to_string call in the loop, the memory still increases.
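For completeness, here is a sketch of that stripped-down loop with an explicit gc.collect() per iteration to rule out Python objects that are merely pending collection (the gc call is my addition; everything else matches the listing above). If the RSS keeps climbing even after forced collection, the retained allocations presumably live on the native SaxonC side rather than in Python objects.

import gc
import psutil
from saxoncpe import *

process = psutil.Process()

with PySaxonProcessor(license=True) as saxon_processor:
    xslt_proc = saxon_processor.new_xslt30_processor()
    document = saxon_processor.parse_xml(xml_text="<request></request>")
    executable = xslt_proc.compile_stylesheet(stylesheet_file="transforms/QuoteRequest-sample.xslt")

    for i in range(1000):
        executable.transform_to_string(xdm_node=document)
        gc.collect()  # force a Python-level collection before sampling RSS
        print(process.memory_info().rss)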

What is causing the memory to be held onto? How can I release it so this can be used in an AWS Lambda handling hundreds or thousands of requests?
