

### What happened

I run a single BashOperator for a long-running task: we have to download data for 8+ hours initially from the rate-limited data source API, then download more each day in small increments. I run the command with BashOperator because I have Python, C, and Rust programs being scheduled by Airflow:

`bash_command='umask 002 && cd /opt/my_code/ && /opt/my_code/venv/bin/python -m path.to.my.python.namespace'`

Currently only the long task is running and everything else is queued, even though we have more resources: we're only using 3% CPU and 2 GB of memory (out of 64 GB), but the scheduler is unable to run any other simple task at the same time. When the long-running task finishes, the other tasks resume normally. This occurs consistently, every time, also on 2.1.2.

### What you think should happen instead

I expect my long-running BashOperator task to run, but for Airflow to have the resources to run other tasks without getting blocked like this. I expect to be able to do some parallel execution with LocalExecutor. A minimal sketch of the setup follows below.
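To make the shape of the problem concrete, here is a minimal sketch of the kind of DAG involved. The DAG id, schedule, and task names are hypothetical, made up for this example; the long task stands in for our 8+ hour download, and the quick tasks stand in for the jobs that end up queued behind it.

```python
# Minimal sketch with hypothetical names: one long-running BashOperator
# next to trivial tasks that should run in parallel under LocalExecutor,
# but in our case stay queued until the long task finishes.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="long_download_example",  # hypothetical
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Same shape as our real command: set umask, cd, run a module from a venv.
    long_download = BashOperator(
        task_id="long_download",
        bash_command=(
            "umask 002 && cd /opt/my_code/ && "
            "/opt/my_code/venv/bin/python -m path.to.my.python.namespace"
        ),
    )

    # Independent trivial tasks that we would expect to run while
    # long_download is still running, instead of sitting in "queued".
    quick_tasks = [
        BashOperator(task_id=f"quick_{i}", bash_command="echo ok")
        for i in range(3)
    ]
```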

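Since the number of task instances LocalExecutor runs at once is bounded by a few configuration values, a dump of the effective settings may help triage. This is just a sketch assuming the Airflow 2.x `airflow.configuration` API; the keys are the standard `[core]` options, not values taken from our deployment (the same information is available from `airflow config list` on the host).

```python
# Sketch: print the concurrency-related [core] settings in effect.
# Standard option names; actual values on the affected host may differ.
from airflow.configuration import conf

for key in ("executor", "parallelism", "dag_concurrency", "max_active_runs_per_dag"):
    print(key, "=", conf.get("core", key, fallback="<unset>"))
```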
### Operating System

Linux / Ubuntu Server

### Versions of Apache Airflow Providers

apache-airflow-providers-postgres==2.3.0

### Deployment

Virtualenv installation

### Deployment details

Installed with Virtualenv / Ansible. Relevant settings:

| setting | value |
| --- | --- |
| sql_alchemy_conn | |
| dags_folder | /opt/data_workflows/dags |
| task_logging_handler | _task_handler.FileTaskHandler |

Ansible variables:

- `airflow_database_engine_collation_for_ids:`
- `airflow_dags_are_paused_at_creation: True`
- `airflow_plugins_folder: "/plugins"`

### Code of Conduct

I agree to follow this project's Code of Conduct.
