Autoscale Python Celery with Amazon EC2
I have a Celery task manager that crunches some numbers for company analytics.
The task manager and workers are hosted on an Amazon EC2 Linux server.
I need to set up the system so that if we send too many tasks to Celery, Amazon automatically spins up a new EC2 instance to run more workers and balances the load across them.
The services I'm aware of are Amazon Auto Scaling and Elastic Load Balancing, which seem like exactly what I want to use; however, I'm not sure of the best way to configure Celery.
I think I ought to have a Celery "master" that collects all the tasks and a number of Celery workers that execute them. As the number of tasks increases, I want to add more workers. Because Auto Scaling works by launching instances from an AMI of the Celery server, I think I'm currently cloning the master as well as the workers, which doesn't seem like what I want.
How do I organise this to achieve my end goal: flexible, autoscaling task management, with Celery managing the tasks and Amazon Web Services hosting the compute?
As much detail as possible in any answers (or links to tutorials!) would be greatly appreciated, as most tutorials and advice seem to assume a lot of background knowledge that I don't currently have!
You do not need a master-worker architecture to get this to work. If I understand your question correctly, you want to be able to scale based on queue size. I would say it will be easier if you take roughly the following approach.
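As a rough sketch of the queue-size idea (not a drop-in solution): the snippet below reads the number of pending tasks from the broker and publishes it as a custom CloudWatch metric that an Auto Scaling policy can alarm on. It assumes a Redis broker and Celery's default "celery" queue; the broker URL, metric namespace, and metric name are placeholders I've made up, and it uses redis-py and boto3.

```python
import boto3
import redis

# Assumptions (placeholders, not from the original post): Redis broker,
# default "celery" queue, and a custom CloudWatch metric that an
# Auto Scaling policy watches.
REDIS_URL = "redis://broker-host:6379/0"   # hypothetical broker address
QUEUE_NAME = "celery"                      # Celery's default queue name
METRIC_NAMESPACE = "CeleryAutoscale"       # made-up namespace
METRIC_NAME = "QueueDepth"                 # made-up metric name


def publish_queue_depth():
    """Read the broker queue length and push it to CloudWatch."""
    r = redis.Redis.from_url(REDIS_URL)
    depth = r.llen(QUEUE_NAME)  # tasks waiting for a worker

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace=METRIC_NAMESPACE,
        MetricData=[{
            "MetricName": METRIC_NAME,
            "Value": depth,
            "Unit": "Count",
        }],
    )
    return depth


if __name__ == "__main__":
    print("queue depth:", publish_queue_depth())
```

Run something like this on a schedule (cron or a small loop), then attach a CloudWatch alarm and scaling policy to the metric so the Auto Scaling group adds worker instances when the queue grows and removes them when it drains. The worker AMI then only needs the worker code and the broker URL; the broker and result backend live on a separate fixed instance (or a managed service), so nothing resembling a "master" gets cloned.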