Reinforcement Learning Based Decision Support Tool For Epidemic Control


Mohamed-Amine Chadi, Hajar Mousannif


Rationale: COVID-19 is certainly one of the worst pandemics ever. In the absence of a vaccine, classical epidemiological measures such as testing to isolate infected people, quarantine, and social distancing are ways to slow the growth of new infections as much and as soon as possible, but at the cost of economic and social disruption. Implementing timely and appropriate public health interventions is therefore a challenge.

Objective: This study investigates a reinforcement learning based approach to incrementally learn how intensely each public health intervention should be applied in a given region at each period.

Methods: First, we define the basic components of a reinforcement learning (RL) setup (i.e., states, reward, actions, and transition function); these constitute the learning environment for the agent (i.e., an AI model). We then train the agent online using the RL algorithm known as REINFORCE. Finally, a flow network developed to serve as an epidemiological model is used to visualize the decisions taken by the agent under different epidemic and demographic state scenarios.

Main results: After a relatively short training period, the agent starts taking reasonable actions that balance public health and economic considerations. To test the developed tool, we ran the RL agent on regions of different demographic scales and recorded the output policy, which remained consistent with the training performance. The flow network used to visualize the simulation results is considerably useful, as it shows a high correlation between the simulated results and real case scenarios.

Conclusion: This work shows that the reinforcement learning paradigm can be used to learn public health policies in complex epidemiological models. Moreover, through this experiment, we demonstrate that the developed model can be very useful when fed with real data.

Future work: When treating trade-off problems (balancing two goals), as here, engineering a good reward that encapsulates all goals can be difficult; future work might tackle this problem by investigating techniques such as inverse reinforcement learning and human-in-the-loop learning. Regarding the developed epidemiological model, we aim to gather proper real data that can make the training environment more realistic, and to apply the model to a network of regions instead of a single region.
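The Methods section describes training a policy with the REINFORCE algorithm over states, actions, and rewards. The following is a minimal sketch of a REINFORCE update for a discrete set of intervention intensities; the state features, the number of intensity levels, and the linear softmax policy are illustrative assumptions, not the authors' actual environment or model.

```python
import numpy as np

N_FEATURES = 3  # e.g. [infected fraction, economic index, hospital load] (assumed)
N_ACTIONS = 4   # e.g. intervention intensity levels 0..3 (assumed)

theta = np.zeros((N_FEATURES, N_ACTIONS))  # linear softmax policy weights

def policy(state, theta):
    """Softmax action probabilities for a linear policy."""
    logits = state @ theta
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed backwards over an episode."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return np.array(out[::-1])

def reinforce_update(episode, theta, lr=0.01, gamma=0.99):
    """One REINFORCE step from a list of (state, action, reward) tuples."""
    states, actions, rewards = zip(*episode)
    returns = discounted_returns(rewards, gamma)
    for s, a, G in zip(states, actions, returns):
        p = policy(s, theta)
        # grad of log pi(a|s) for a linear softmax policy: outer(s, e_a - p)
        grad_log = -np.outer(s, p)
        grad_log[:, a] += s
        theta = theta + lr * G * grad_log  # ascend expected return
    return theta
```

In an online training loop, each episode would be generated by sampling actions from `policy(state, theta)`, stepping the epidemiological environment, and passing the collected trajectory to `reinforce_update`; the reward would combine the public health and economic terms the abstract mentions.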
