Describe the bug
Client.null_logger() calls addHandler(NullHandler()) on the shared "null_logger" logger every time a Client is instantiated without an explicit logger argument. The NullHandler instances accumulate indefinitely in logger.handlers (and in the global logging._handlerList weakref list). In long-lived processes that create many Client instances (background workers, batch jobs, servers handling webhooks), this causes unbounded memory growth.
Steps to reproduce
```python
import logging
from paddle_billing import Client

for _ in range(10_000):
    Client("fake_key")

print(len(logging.getLogger("null_logger").handlers))
# -> 10000
```
RSS grows ~5 MB per 10k Client instantiations, retained forever (GC cannot collect handlers because they are live-referenced from logger.handlers).
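The accumulation can also be reproduced with the standard logging module alone, no SDK install needed; the helper below (with a stand-in logger name, `leak_demo`) mimics what null_logger() does on each call:

```python
import logging

def leaky_null_logger() -> logging.Logger:
    # Mimics the SDK's null_logger(): attaches a fresh NullHandler on every call.
    log = logging.getLogger("leak_demo")
    log.addHandler(logging.NullHandler())
    return log

for _ in range(10_000):
    leaky_null_logger()

# Every handler is strongly referenced from logger.handlers, so none is collectable.
handlers = logging.getLogger("leak_demo").handlers
print(len(handlers))  # -> 10000
```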
Expected behavior
Creating multiple Client instances should not permanently attach new handlers to a shared logger. The null logger should either:

- be configured once at module import, or
- guard `addHandler` with `if not null_logger.hasHandlers()`.
Code snippets
Python version
Python 3.12
SDK version
paddle-python-sdk: 1.10.0
API version
Paddle Version 1
Additional context
Root cause
paddle_billing/Client.py:98-107:
```python
@staticmethod
def null_logger() -> Logger:
    null_logger = getLogger("null_logger")
    null_logger.addHandler(NullHandler())  # runs on every call
    return null_logger
```
Called unconditionally from `Client.__init__`:

```python
self.log = logger if logger else Client.null_logger()
```
As a workaround, I pass a pre-configured logger explicitly so `Client.null_logger()` is never called:

```python
import logging

_PADDLE_LOGGER = logging.getLogger("paddle_billing_client")
client = Client(api_key, logger=_PADDLE_LOGGER)
```