Bug #5787


multithreading example with preview of SaxonC 12: Fatal error: StackOverflowError: Enabling the yellow zone of the stack did not make any stack space available

Added by O'Neil Delpratt almost 2 years ago. Updated almost 2 years ago.

Status: Closed
Priority: Low
Category: C++ API
Start date: 2023-01-03
Due date:
% Done: 100%
Estimated time:
Applies to branch:
Fix Committed on Branch:
Fixed in Maintenance Release:
Found in version: 12.0
Fixed in version: 12.1
SaxonC Languages:
SaxonC Platforms:
SaxonC Architecture:

Description

User reported crash here: https://saxonica.plan.io/boards/4/topics/9158 and https://saxonica.plan.io/boards/4/topics/9160

SaxonC 11.99 crashes with the following error:

multithreading example with preview of SaxonC 12: Fatal error: StackOverflowError: Enabling the yellow zone of the stack did not make any stack space available

The failures happen under Flask; we will try to reproduce the issue outside of Flask. See the example code below:

from flask import Flask, g
from saxonc import *

def get_saxon():
    if 'saxon' not in g:
        g.saxon = PySaxonProcessor(license = False)
    return g.saxon


app = Flask(__name__)


@app.route('/')
def hello_world():  # put application's code here
    return 'Hello World!'

@app.route('/saxontest2')
def saxontest2():
    """Returns result from Saxon XQuery evaluation"""
    return get_saxon().new_xquery_processor().run_query_to_string(query_text = '"Hello from Saxon: " ||  current-dateTime()')

@app.teardown_appcontext
def teardown_saxonc_thread(exception):
    saxon = g.pop('saxon', None)
    if saxon is not None:
        saxon.detach_current_thread()  # parentheses added: without them the method is never invoked

if __name__ == '__main__':
    app.run()
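The per-request lifecycle the Flask app uses (create the processor lazily in the application context, tear it down afterwards) can be sketched without Flask or SaxonC using threading.local. Everything in this snippet (ThreadResource, get_resource) is invented for illustration and mirrors how get_saxon() and teardown_saxonc_thread above manage PySaxonProcessor via flask.g:

```python
import threading

_count_lock = threading.Lock()

class ThreadResource:
    """Invented stand-in for a per-thread native resource."""
    created = 0  # how many distinct per-thread instances were built

    def __init__(self):
        with _count_lock:
            ThreadResource.created += 1

_local = threading.local()

def get_resource():
    # Lazily create one resource per thread, like get_saxon() above.
    if not hasattr(_local, "resource"):
        _local.resource = ThreadResource()
    return _local.resource

def worker():
    assert get_resource() is get_resource()  # same thread reuses its instance

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(ThreadResource.created)  # 3: one instance per worker thread
```

The point of the pattern is that a resource tied to one thread is never shared with another; each worker builds and later releases its own instance.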

Another example, which does not require Flask:

import threading
import time

from saxonc import *

exitFlag = 0


class myThread(threading.Thread):
    def __init__(self, threadID, name, counter, proc, xquery_processor):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.counter = counter
        self.xquery_processor = xquery_processor
        self.proc = proc

    def run(self):
        print("Starting " + self.name)
        example(self.name, self.counter, 5, self.proc, self.xquery_processor)
        print("Exiting " + self.name)


def example(threadName, delay, counter, proc, xquery_processor):
    while counter > 0:
        if exitFlag:
            return  # threadName is a string and has no exit() method; just leave the loop
        time.sleep(delay)

        result = xquery_processor.run_query_to_string(query_text='"Hello from SaxonC: " || current-dateTime()')

        if result is None:
            return

        print("%s : %s" % (threadName, result))
        counter -= 1
        proc.detach_current_thread()  # parentheses added: without them the method is never invoked


def example2(threadName, delay, counter, xquery_processor):
    while counter > 0:
        if exitFlag:
            return
        time.sleep(delay)
        result = xquery_processor.run_query_to_string(query_text='"Hello from SaxonC: " || current-dateTime()')
        if result is None:
            return
        print("%s : %s" % (threadName, result))
        counter -= 1


saxon_proc = PySaxonProcessor(license=False)

# Create new threads
thread1 = myThread(1, "Thread-1", 1, saxon_proc, saxon_proc.new_xquery_processor())
thread2 = myThread(2, "Thread-2", 2, saxon_proc, saxon_proc.new_xquery_processor())

# Start new Threads
thread1.start()
thread2.start()
thread1.join()
thread2.join()
print("Exiting Main Thread")
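One subtle Python pitfall to watch for with calls like detach_current_thread: writing the name without parentheses (assuming it is an ordinary method) merely looks the method up and never invokes it, so the line silently does nothing. A minimal, SaxonC-free demonstration with an invented FakeProcessor class:

```python
class FakeProcessor:
    """Invented stand-in for an object with a detach method."""

    def __init__(self):
        self.detached = False

    def detach_current_thread(self):
        self.detached = True

proc = FakeProcessor()
proc.detach_current_thread        # no parentheses: only looks the method up
print(proc.detached)              # False: nothing was called
proc.detach_current_thread()      # the actual call
print(proc.detached)              # True
```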
#1

Updated by Martin Honnen almost 2 years ago

Some updates here: trying to run my Python Flask code using SaxonC (now the HE 12.0 release) continues to fail under WSL Ubuntu Linux on the very first attempt to use a SaxonC feature such as XPath, XQuery, or XSLT. Here is an example stack trace from trying to evaluate the XPath expression current-dateTime() via the REST API:

Stacktrace for the failing thread 0x000056437c716000:
  SP 0x00007ff6e3ffc830 IP 0x00007ff6f3f6fe20  [image code] com.oracle.svm.core.jdk.VMErrorSubstitutions.shutdown(VMErrorSubstitutions.java:116)
  SP 0x00007ff6e3ffc830 IP 0x00007ff6f3f6fe20  [image code] com.oracle.svm.core.jdk.VMErrorSubstitutions.shouldNotReachHere(VMErrorSubstitutions.java:109)
  SP 0x00007ff6e3ffc860 IP 0x00007ff6f3fb9de4  [image code] com.oracle.svm.core.util.VMError.shouldNotReachHere(VMError.java:65)
  SP 0x00007ff6e3ffc870 IP 0x00007ff6f3f5ec3c  [image code] com.oracle.svm.core.graal.snippets.StackOverflowCheckImpl.onYellowZoneMadeAvailable(StackOverflowCheckImpl.java:197)
  SP 0x00007ff6e3ffc890 IP 0x00007ff6f3f5ef78  [image code] com.oracle.svm.core.graal.snippets.StackOverflowCheckImpl.makeYellowZoneAvailable(StackOverflowCheckImpl.java:172)
  SP 0x00007ff6e3ffc890 IP 0x00007ff6f3f5ef78  [image code] com.oracle.svm.core.graal.snippets.StackOverflowCheckImpl.throwNewStackOverflowError(StackOverflowCheckImpl.java:308)
  SP 0x00007ff6e3ffca10 IP 0x00007ff6f4d427ad  [image code] net.sf.saxon.option.cpp.ProcessorDataAccumulator.createProcessorDataWithCapacity(ProcessorDataAccumulator.java:73)
  SP 0x00007ff6e3ffca30 IP 0x00007ff6f3f1a237  [image code] com.oracle.svm.core.code.IsolateEnterStub.ProcessorDataAccumulator_createProcessorDataWithCapacity_e499d2c6827affd91155b3051b71608febebe62c(IsolateEnterStub.java:0)

The full output is rather long and I can't interpret much of it, but for completeness I paste it below as well. What is more interesting, however, is that I also uploaded the app to an Azure Linux app service (probably running in a Docker container) to see whether it crashes as often there and is similarly unusable, but to my surprise it shows much more stability there than in my attempts to run the app locally under WSL Linux. I don't know yet what makes the difference: whether the production WSGI server they presumably use on the Azure app service is that much more stable than the development server Flask uses locally, or which settings matter.

(saxonc-workbench) mh@LibertyDell:~/saxonc-workbench$ flask run
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5000
Press CTRL+C to quit
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET / HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/css/ace-fiddle.css HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/frame-write.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/ace-modes.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/xpath.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/transform.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/xquery.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/ace-editors-init.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/init-examples.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/js/event-listeners-init.js HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/examples/defaults/default.xml HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/examples/defaults/default.xsl HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/examples/defaults/default.xml HTTP/1.1" 304 -
127.0.0.1 - - [23/Jan/2023 11:10:11] "GET /static/examples/defaults/default.xsl HTTP/1.1" 304 -
Fatal error: StackOverflowError: Enabling the yellow zone of the stack did not make any stack space available. Possible reasons for that: 1) A call from native code to Java code provided the wrong JNI environment or the wrong IsolateThread; 2) Frames of native code filled the stack, and now there is not even enough stack space left to throw a regular StackOverflowError; 3) An internal VM error occurred.

Current timestamp: 1674468637664

Printing Instructions (ip=0x00007ff6f3f6fe20):
  0x00007ff6f3f6fe00: 0x00 0x41 0xc7 0x87 0xfc 0x00 0x00 0x00 0xfe 0xfe 0xfe 0x7e 0x48 0x8b 0x7c 0x24
  0x00007ff6f3f6fe10: 0x20 0x48 0x8b 0x74 0x24 0x10 0x48 0x8b 0x54 0x24 0x18 0xe8 0x90 0xfc 0xff 0xff
  0x00007ff6f3f6fe20: 0x90 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc 0xcc
  0x00007ff6f3f6fe30: 0x48 0x83 0xec 0x18 0x49 0x3b 0x67 0x08 0x0f 0x86 0x9f 0x00 0x00 0x00 0x48 0x8b

Top of stack (sp=0x00007ff6e3ffc830):
  0x00007ff6e3ffc830: 0x0000000000000078 0x00007ff6f6fda8c0
  0x00007ff6e3ffc840: 0x00007ff6f27bf0c0 0x00007ff6f2300000
  0x00007ff6e3ffc850: 0x00007ff6f3f5ec3c 0x00007ff6f3fb9de4
  0x00007ff6e3ffc860: 0x00007ff6f2206a40 0x00007ff6f3f5ec3c
  0x00007ff6e3ffc870: 0x00007ff6f7eb7cb8 0x000056437a998abb
  0x00007ff6e3ffc880: 0x00007ff6e3ffc920 0x00007ff6f3f5ef78
  0x00007ff6e3ffc890: 0x00007ff6f6fde110 0x00007ff6f6fed840
  0x00007ff6e3ffc8a0: 0x0000000000000001 0x0000000000000011
  0x00007ff6e3ffc8b0: 0xffffffffffffffff 0xffffffffffffffff
  0x00007ff6e3ffc8c0: 0x0000000000000041 0x0000000000000041
  0x00007ff6e3ffc8d0: 0x00007ff6f75d7664 0x0000000000000000
  0x00007ff6e3ffc8e0: 0x00007ff6f6e51a01 0x00007ff6f75d7664
  0x00007ff6e3ffc8f0: 0x0000000000000001 0x0000000000000000
  0x00007ff6e3ffc900: 0x0000000000000000 0x0000000000000000
  0x00007ff6e3ffc910: 0x0000000000000000 0x0000000000000000
  0x00007ff6e3ffc920: 0x0000000000000041 0x0000000000000041
  0x00007ff6e3ffc930: 0x00007ff6f6e4d360 0x00007ff6f6e4d360
  0x00007ff6e3ffc940: 0x00007ff6f6e4d3a1 0x00007ff6f6e4d360
  0x00007ff6e3ffc950: 0x0000000000000000 0x0000000000000000
  0x00007ff6e3ffc960: 0x00007ff6f2274000 0x000056437adaed90
  0x00007ff6e3ffc970: 0xffffffffffffffff 0xffffffffffffffff
  0x00007ff6e3ffc980: 0x000056437adaed90 0x0000000000000000
  0x00007ff6e3ffc990: 0x0000000000000000 0x0000000000000000
  0x00007ff6e3ffc9a0: 0x00007ff6f7f7c070 0x000056437c71e350
  0x00007ff6e3ffc9b0: 0x00007ff6f6e42c30 0x000056437adaed60
  0x00007ff6e3ffc9c0: 0x0000000000000002 0x0000000000000001
  0x00007ff6e3ffc9d0: 0x000056437c716000 0x0000000000000001
  0x00007ff6e3ffc9e0: 0x000056437c71e300 0x000056437c71e300
  0x00007ff6e3ffc9f0: 0x000056437ada1340 0x00007ff6f6d52b78
  0x00007ff6e3ffca00: 0x000056437c716000 0x00007ff6f4d427ad
  0x00007ff6e3ffca10: 0x0000000000000078 0x000056437adbe4a8
  0x00007ff6e3ffca20: 0x00007ff6f6fde110 0x00007ff6f3f1a237

Top frame info:
  TotalFrameSize in CodeInfoTable 48

Threads:
  0x00007ff6ec000b80 STATUS_IN_NATIVE (ALLOW_SAFEPOINT) "Reference Handler" - 0x00007ff6f2e68418, daemon, stack(0x00007ff6f1901000,0x00007ff6f2100000)
  0x000056437c716000 STATUS_IN_JAVA (PREVENT_VM_FROM_REACHING_SAFEPOINT) "main" - 0x00007ff6f2e68360, stack(0x00007ffc09c48000,0x00007ffc0a446000)

VM thread locals for the failing thread 0x000056437c716000:
  0 (8 bytes): JNIThreadLocalEnvironment.jniFunctions = (bytes)
    0x000056437c716000: 0x00007ff6f2a10010
  8 (8 bytes): StackOverflowCheckImpl.stackBoundaryTL = (Word) 1 (0x0000000000000001)
  16 (4 bytes): Safepoint.safepointRequested = (int) 2147400624 (0x7ffebbb0)
  20 (4 bytes): StatusSupport.statusTL = (int) 1 (0x00000001)
  24 (32 bytes): ThreadLocalAllocation.regularTLAB = (bytes)
    0x000056437c716018: 0x00007ff6f2100000 0x00007ff6f2200000
    0x000056437c716028: 0x00007ff6f217d548 0x0000000000000000
  56 (8 bytes): PlatformThreads.currentThread = (Object) java.lang.Thread (0x00007ff6f2e68360)
  64 (8 bytes): JavaFrameAnchors.lastAnchor = (Word) 0 (0x0000000000000000)
  72 (8 bytes): AccessControlContextStack = (Object) java.util.ArrayDeque (0x00007ff6f2148ba0)
  80 (8 bytes): ExceptionUnwind.currentException = (Object) null
  88 (8 bytes): IdentityHashCodeSupport.hashCodeGeneratorTL = (Object) java.util.SplittableRandom (0x00007ff6f2148560)
  96 (8 bytes): IsolatedCompileClient.currentClient = (Object) null
  104 (8 bytes): IsolatedCompileContext.currentContext = (Object) null
  112 (8 bytes): JNIObjectHandles.handles = (Object) com.oracle.svm.core.handles.ThreadLocalHandles (0x00007ff6f2101158)
  120 (8 bytes): JNIThreadLocalPendingException.pendingException = (Object) null
  128 (8 bytes): JNIThreadLocalPinnedObjects.pinnedObjectsListHead = (Object) null
  136 (8 bytes): JNIThreadOwnedMonitors.ownedMonitors = (Object) null
  144 (8 bytes): NoAllocationVerifier.openVerifiers = (Object) null
  152 (8 bytes): ThreadingSupportImpl.activeTimer = (Object) null
  160 (8 bytes): SubstrateDiagnostics.threadOnlyAttachedForCrashHandler = (bytes)
    0x000056437c7160a0: 0x0000000000000000
  168 (8 bytes): ThreadLocalAllocation.allocatedBytes = (Word) 0 (0x0000000000000000)
  176 (8 bytes): VMThreads.IsolateTL = (Word) 140698601914368 (0x00007ff6f2300000)
  184 (8 bytes): VMThreads.OSThreadHandleTL = (Word) 140698705633280 (0x00007ff6f85ea000)
  192 (8 bytes): VMThreads.OSThreadIdTL = (Word) 140698705633280 (0x00007ff6f85ea000)
  200 (8 bytes): VMThreads.StackBase = (Word) 140720480739328 (0x00007ffc0a446000)
  208 (8 bytes): VMThreads.StackEnd = (Word) 140720472358912 (0x00007ffc09c48000)
  216 (8 bytes): VMThreads.StartedByCurrentIsolate = (bytes)
    0x000056437c7160d8: 0x0000000000000000
  224 (8 bytes): VMThreads.nextTL = (Word) 0 (0x0000000000000000)
  232 (8 bytes): VMThreads.unalignedIsolateThreadMemoryTL = (Word) 94847850602496 (0x000056437c716000)
  240 (4 bytes): ActionOnExitSafepointSupport.actionTL = (int) 0 (0x00000000)
  244 (4 bytes): ActionOnTransitionToJavaSupport.actionTL = (int) 0 (0x00000000)
  248 (4 bytes): ImplicitExceptions.implicitExceptionsAreFatal = (int) 0 (0x00000000)
  252 (4 bytes): StackOverflowCheckImpl.yellowZoneStateTL = (int) 2130640638 (0x7efefefe)
  256 (4 bytes): StatusSupport.safepointBehaviorTL = (int) 1 (0x00000001)
  260 (4 bytes): ThreadingSupportImpl.currentPauseDepth = (int) 1 (0x00000001)

No VMOperation in progress

The 15 most recent VM operation status changes (oldest first):

Counters:

Java frame anchors for the failing thread 0x000056437c716000:
  No anchors

Stacktrace for the failing thread 0x000056437c716000:
  SP 0x00007ff6e3ffc830 IP 0x00007ff6f3f6fe20  [image code] com.oracle.svm.core.jdk.VMErrorSubstitutions.shutdown(VMErrorSubstitutions.java:116)
  SP 0x00007ff6e3ffc830 IP 0x00007ff6f3f6fe20  [image code] com.oracle.svm.core.jdk.VMErrorSubstitutions.shouldNotReachHere(VMErrorSubstitutions.java:109)
  SP 0x00007ff6e3ffc860 IP 0x00007ff6f3fb9de4  [image code] com.oracle.svm.core.util.VMError.shouldNotReachHere(VMError.java:65)
  SP 0x00007ff6e3ffc870 IP 0x00007ff6f3f5ec3c  [image code] com.oracle.svm.core.graal.snippets.StackOverflowCheckImpl.onYellowZoneMadeAvailable(StackOverflowCheckImpl.java:197)
  SP 0x00007ff6e3ffc890 IP 0x00007ff6f3f5ef78  [image code] com.oracle.svm.core.graal.snippets.StackOverflowCheckImpl.makeYellowZoneAvailable(StackOverflowCheckImpl.java:172)
  SP 0x00007ff6e3ffc890 IP 0x00007ff6f3f5ef78  [image code] com.oracle.svm.core.graal.snippets.StackOverflowCheckImpl.throwNewStackOverflowError(StackOverflowCheckImpl.java:308)
  SP 0x00007ff6e3ffca10 IP 0x00007ff6f4d427ad  [image code] net.sf.saxon.option.cpp.ProcessorDataAccumulator.createProcessorDataWithCapacity(ProcessorDataAccumulator.java:73)
  SP 0x00007ff6e3ffca30 IP 0x00007ff6f3f1a237  [image code] com.oracle.svm.core.code.IsolateEnterStub.ProcessorDataAccumulator_createProcessorDataWithCapacity_e499d2c6827affd91155b3051b71608febebe62c(IsolateEnterStub.java:0)

VM mutexes:
  mutex "referencePendingList" is unlocked.
  mutex "mainVMOperationControlWorkQueue" is unlocked.
  mutex "thread" is unlocked.

AOT compiled code is mapped at 0x00007ff6f3ed8000 - 0x00007ff6f56ea34f

Heap settings and statistics:
  Supports isolates: true
  Heap base: 0x00007ff6f2300000
  Object reference size: 8
  Aligned chunk size: 1048576
  Incremental collections: 0
  Complete collections: 0

Native image heap boundaries:
  ReadOnly Primitives: 0x00007ff6f2401028 - 0x00007ff6f2734840
  ReadOnly References: 0x00007ff6f2734840 - 0x00007ff6f2a0f528
  ReadOnly Relocatables: 0x00007ff6f2a10000 - 0x00007ff6f2c4bb50
  Writable Primitives: 0x00007ff6f2c4c000 - 0x00007ff6f2d75ab8
  Writable References: 0x00007ff6f2d75ab8 - 0x00007ff6f32195a8
  Writable Huge: 0x00007ff6f3300030 - 0x00007ff6f336f938
  ReadOnly Huge: 0x00007ff6f3370030 - 0x00007ff6f394ce18

Heap:
  Young generation:
    Eden:
      edenSpace:
        aligned: 0/0 unaligned: 0/0
    Survivors:
      Survivor-1 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-1 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-2 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-2 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-3 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-3 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-4 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-4 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-5 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-5 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-6 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-6 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-7 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-7 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-8 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-8 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-9 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-9 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-10 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-10 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-11 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-11 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-12 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-12 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-13 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-13 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-14 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-14 To:
        aligned: 0/0 unaligned: 0/0
      Survivor-15 From:
        aligned: 0/0 unaligned: 0/0
      Survivor-15 To:
        aligned: 0/0 unaligned: 0/0
  Old generation:
    oldFromSpace:
      aligned: 0/0 unaligned: 0/0
    oldToSpace:
      aligned: 0/0 unaligned: 0/0

  Unused:
    aligned: 0/0

Fatal error: StackOverflowError: Enabling the yellow zone of the stack did not make any stack space available. Possible reasons for that: 1) A call from native code to Java code provided the wrong JNI environment or the wrong IsolateThread; 2) Frames of native code filled the stack, and now there is not even enough stack space left to throw a regular StackOverflowError; 3) An internal VM error occurred.
#2

Updated by Martin Honnen almost 2 years ago

Some further investigation: since the app on Azure runs Debian while I tried Ubuntu, I have also tried WSL Debian with Flask, but with the same result as long as Flask uses its internal web server: the very first attempt to serve an HTTP request that uses SaxonC HE for XPath, XSLT, or XQuery results in that crash.

Then I investigated what a "production WSGI server" might be and how to use one. It turns out you can simply do e.g. pip install gunicorn and then, instead of running flask, run e.g. gunicorn -b 0.0.0.0:8000 app:app. Interestingly enough, in the first few tests I have run now, SaxonC doesn't crash the web server; it seems to run much like the test app I installed on Azure.

I still don't understand what makes the difference, but that is at least a promising result: with a production WSGI server like gunicorn you can run SaxonC HE from Python with reasonable responsiveness and stability, and apparently without those dreaded "StackOverflowError: ... yellow zone ..." crashes.

#3

Updated by O'Neil Delpratt almost 2 years ago

Hi, I have reproduced this crash using Flask. Now investigating.

#4

Updated by Martin Honnen almost 2 years ago

I am currently running my code with Python 3.9 and Flask 2.2 on macOS 13 on a Mac Mini M1, and interestingly enough the development server (Werkzeug/2.2.2, Python/3.9.6) seems to run fine and stably here; no crashes so far.

So my experience so far:

Under Windows with the Flask development server, you are able to serve a couple of requests using SaxonC for XPath and/or XSLT and/or XQuery, but at some point it crashes the server.

Under Linux (at least the WSL editions I use), any request that tries to use SaxonC crashes the development server immediately, but if you switch to a production server like gunicorn everything runs fine and stably. As a test I have also deployed the app to an Azure Linux server with gunicorn, and there it also runs stably.

On macOS it looks as if even the development server runs fine and stably.

Perhaps that helps anyone else trying to use SaxonC with Python in a Flask web app.

#5

Updated by O'Neil Delpratt almost 2 years ago

  • Status changed from New to In Progress

I have finally managed to fix this bug. A call to the attachCurrentThread() method is required in the SaxonProcessor C++ constructor.

#6

Updated by O'Neil Delpratt almost 2 years ago

I have been using the FastAPI web server for testing, and it is working as expected.

#7

Updated by Alexis de Lattre almost 2 years ago

I'm affected by this bug while working with SaxonC HE from Python under Linux. Great news that you managed to fix it. How can we benefit from the fix? Should we wait for a new release (12.1)?

#8

Updated by O'Neil Delpratt almost 2 years ago

  • Status changed from In Progress to Resolved
  • % Done changed from 0 to 100
  • Found in version changed from 11.99 to 12.0

Bug fixed and committed. The fix will be available in the next maintenance release.

#9

Updated by O'Neil Delpratt almost 2 years ago

  • Status changed from Resolved to Closed
  • Fixed in version set to 12.1

Bug fix applied in the SaxonC 12.1 maintenance release.
