Can Reinforcement Learning solve the Human Allocation Problem?
In recent years, reinforcement learning (RL) has emerged as a promising way to solve old problems. Its ability to find approximate solutions to NP-hard problems has become crucial for building modern intelligent decision systems. In this paper, we consider the human resource allocation problem, a classical NP-hard problem with a fixed number of workers and tasks, and tackle it with several RL methods: a deep contextual bandit, a double deep Q-network (DDQN), and DDQN combined with Monte Carlo Tree Search (MCTS). The deep contextual bandit performed well in low-dimensional cases but is impractical for real problems in the chosen setting. To overcome this barrier, we decomposed the task as a Markov decision process and reduced the action space so that all sequential RL methods became applicable. We propose a way to address NP-hard problems with modern RL approaches and compare the performance of the different methods. In appropriate settings, the algorithms reduced completion time by at least 30% compared to random assignment.
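The decomposition described above can be illustrated with a minimal sketch. This is not the paper's implementation: the cost matrix, policies, and makespan objective are assumptions chosen to show how assigning one task per step shrinks the action space from all joint assignments to a single worker choice per step.

```python
import random

# Hypothetical setup (not from the paper): cost[t][w] is the time
# worker w needs to finish task t.
random.seed(0)
N_TASKS, N_WORKERS = 6, 3
cost = [[random.randint(1, 10) for _ in range(N_WORKERS)]
        for _ in range(N_TASKS)]

def episode(policy):
    """Assign tasks one at a time. Each step's action is a single
    worker index, so the action space is N_WORKERS per step instead
    of N_WORKERS ** N_TASKS joint assignments."""
    load = [0] * N_WORKERS              # accumulated time per worker (the state)
    for task in range(N_TASKS):
        worker = policy(load, task)
        load[worker] += cost[task][worker]
    return max(load)                    # makespan: time until all tasks are done

# Random-assignment baseline.
def random_policy(load, task):
    return random.randrange(N_WORKERS)

# Greedy stand-in for a learned policy: pick the worker that
# minimises the resulting finish time for this task.
def greedy_policy(load, task):
    return min(range(N_WORKERS), key=lambda w: load[w] + cost[task][w])

print("random makespan:", episode(random_policy))
print("greedy makespan:", episode(greedy_policy))
```

A learned DDQN policy would replace `greedy_policy`, taking the worker loads (and task features) as the state and a worker index as the action.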
Phong Nguyen, Matsuba Hiroya, Tejdeep Hunabad, Dmitrii Zhilenkov, Hung Nguyen, Khang Nguyen
3/21/2025
Efficient and Concise Explanations for Object Detection with Gaussian-Class Activation Mapping Explainer
To address the challenges of providing quick and plausible explanations in Explainable AI (XAI) for object detection models, we introduce the Gaussian Class Activation Mapping Explainer (G-CAME).
Enhancing the Fairness and Performance of Edge Cameras with Explainable AI
The rising use of Artificial Intelligence (AI) for human detection on Edge camera systems has led to models that are accurate but complex and challenging to interpret and debug.
LangXAI: Integrating Large Vision Models for Generating Textual Explanations to Enhance Explainability in Visual Perception Tasks
LangXAI is a framework that integrates Explainable Artificial Intelligence (XAI) with advanced vision models to generate textual explanations for visual recognition tasks.