
Why Does Celery Retry, But My Job Did Not Fail?

I have a Celery job that runs queries against MySQL databases, but it always fails with a Lock Wait Timeout. After digging into the database queries, I realized that Celery triggered another job after 1

Solution 1:

Depending on how you execute the SQL query, here is what I would try. (1) Since you have bind=True, the task instance is passed as the first parameter to your function; the convention in Celery is to call that first parameter self. (2) You want to try and catch the database-level exception that is occurring and ignore it.

from celery.utils.log import get_task_logger

log = get_task_logger(__name__)


@celery.task(bind=True, acks_late=True)
def etl_pipeline(self, dev=dev, test=test):
    try:
        # try querying the database here using sqlalchemy or mysqlconnect??
        pass
    except Exception as ex:
        # for now, log the exception and type so that you can drill down into what is happening
        log.info('[etl_pipeline] exception of type %s.%s: %s',
                 ex.__class__.__module__, ex.__class__.__name__, ex)
        raise

The logging output should help you determine which error you are getting on the client side.
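Once the log reveals the exception type, you can narrow the broad `except Exception` down to the specific database error and ignore only that one. A minimal sketch of this, assuming the driver raises an exception carrying MySQL's error code 1205 ("Lock wait timeout exceeded") as its first argument — `MySQLLockError` below is a hypothetical stand-in for the real driver class (e.g. the driver's `OperationalError`):

```python
# Stand-in for the driver's exception class carrying (code, message) args;
# in real code you would import and catch the driver's OperationalError.
class MySQLLockError(Exception):
    pass

# MySQL server error code for "Lock wait timeout exceeded"
MYSQL_LOCK_WAIT_TIMEOUT = 1205


def is_lock_wait_timeout(ex):
    """Return True when the exception looks like MySQL error 1205."""
    code = ex.args[0] if ex.args else None
    return code == MYSQL_LOCK_WAIT_TIMEOUT


def run_ignoring_lock_timeouts(query_fn):
    """Run query_fn; swallow lock-wait timeouts, propagate everything else."""
    try:
        return query_fn()
    except MySQLLockError as ex:
        if is_lock_wait_timeout(ex):
            return None  # ignore this specific error, per the advice above
        raise


def flaky_query():
    raise MySQLLockError(1205, "Lock wait timeout exceeded; try restarting transaction")


print(run_ignoring_lock_timeouts(flaky_query))  # -> None
```

Note that swallowing the error means the task returns normally, so Celery acknowledges it instead of retrying; whether that is acceptable depends on whether the lost work is redone by a later run.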
