Details
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | 2022 8th International Conference on Control, Automation and Robotics |
| Subtitle of host publication | ICCAR 2022 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 388-393 |
| Number of pages | 6 |
| ISBN (electronic) | 9781665481168 |
| ISBN (print) | 9781665481175 |
| Publication status | Published - 2022 |
| Event | 8th International Conference on Control, Automation and Robotics, ICCAR 2022, Xiamen, China, 8 Apr 2022 → 10 Apr 2022 |
Abstract
Factory planning can significantly increase manufacturing productivity, but the process is expensive in terms of both cost and time. In this paper, we propose an Unmanned Aerial Vehicle (UAV) framework that accelerates this process and reduces its costs. The framework consists of a UAV equipped with an IMU, a camera and a LiDAR sensor to navigate and explore unknown indoor environments. It is therefore independent of GNSS and relies solely on on-board sensors. The acquired data are intended to enable a Deep Reinforcement Learning (DRL) agent to perform autonomous decision making using a reinforcement learning approach. We propose a simulation of this framework, including several training and testing environments, to be used for developing a DRL agent.
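The abstract describes the framework only at an architectural level and no code accompanies this record. As a rough illustration of how such a setup is commonly structured, the sketch below shows a hypothetical gym-style exploration environment with LiDAR, camera and IMU observations. The class name, observation shapes, action layout and the coverage-based reward are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical observation/action dimensions -- not specified in the paper.
LIDAR_BEAMS = 360       # planar LiDAR scan
IMAGE_SHAPE = (64, 64)  # downscaled grayscale camera frame
IMU_DIM = 6             # angular velocity + linear acceleration

class IndoorExplorationEnv:
    """Toy stand-in for a GNSS-denied indoor exploration environment.

    Observations combine LiDAR, camera and IMU readings; the action is a
    continuous velocity command. Rewards favour newly explored area.
    """

    def __init__(self, seed: int = 0):
        self.rng = np.random.default_rng(seed)
        self.explored = 0.0

    def reset(self):
        self.explored = 0.0
        return self._observe()

    def step(self, action: np.ndarray):
        # Placeholder dynamics: exploration progress grows with commanded speed.
        gain = float(np.clip(np.linalg.norm(action), 0.0, 1.0))
        newly_explored = 0.01 * gain
        self.explored = min(1.0, self.explored + newly_explored)
        reward = newly_explored         # reward newly covered area
        done = self.explored >= 1.0     # episode ends when the map is complete
        return self._observe(), reward, done, {}

    def _observe(self):
        # Random placeholder sensor readings; a real environment would render
        # these from a simulated factory hall.
        return {
            "lidar": self.rng.uniform(0.2, 10.0, LIDAR_BEAMS).astype(np.float32),
            "image": self.rng.random(IMAGE_SHAPE, dtype=np.float32),
            "imu": self.rng.normal(0.0, 0.1, IMU_DIM).astype(np.float32),
        }

def random_policy(obs):
    # Stand-in for a trained DRL policy: vx, vy, vz, yaw-rate commands.
    return np.random.uniform(-1.0, 1.0, size=4)

if __name__ == "__main__":
    env = IndoorExplorationEnv()
    obs, total_reward = env.reset(), 0.0
    for _ in range(1000):
        obs, reward, done, _ = env.step(random_policy(obs))
        total_reward += reward
        if done:
            break
    print(f"explored fraction: {env.explored:.2f}, return: {total_reward:.3f}")
```

In practice the observation dictionary would be fed to a DRL algorithm (for example PPO) rather than the random placeholder policy used here.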
Keywords
- autonomous exploration
- autonomous navigation
- Deep Reinforcement Learning
- factory planning
- GNSS-denied environment
- UAV
ASJC Scopus subject areas
- Mathematics(all): Control and Optimization
- Mathematics(all): Modelling and Simulation
- Computer Science(all): Artificial Intelligence
- Engineering(all): Computational Mechanics
- Engineering(all): Mechanical Engineering
Cite this
Seel, A., Kreutzjans, F., Kuster, B., Stonis, M., & Overmeyer, L. (2022). Deep Reinforcement Learning Based UAV for Indoor Navigation and Exploration in Unknown Environments. In 2022 8th International Conference on Control, Automation and Robotics: ICCAR 2022 (pp. 388-393). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCAR55106.2022.9782602
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Deep Reinforcement Learning Based UAV for Indoor Navigation and Exploration in Unknown Environments
AU - Seel, Andreas
AU - Kreutzjans, Florian
AU - Kuster, Benjamin
AU - Stonis, Malte
AU - Overmeyer, Ludger
N1 - Funding Information: ACKNOWLEDGMENT This work is a part of the IGF project 21395 N of the Research Association for Intralogistics, Materials Handling and Logistic Systems (IFL) and it was funded via the German Federation of Industrial Research Associations (AiF) in the program of Industrial Collective Research (IGF) by the Federal Ministry for Economic Affairs and Energy (BMWi) based on a decision of the German Bundestag.
PY - 2022
Y1 - 2022
N2 - Factory planning can increase the productivity of manufacturing significantly, though the process is expensive when it comes to cost and time. In this paper, we propose an Unmanned Aerial Vehicle (UAV) framework that accelerates this process and decreases the costs. The framework consists of a UAV that is equipped with an IMU, a camera and a LiDAR sensor in order to navigate and explore unknown indoor environments. Thus, it is independent of GNSS and solely uses on-board sensors. The acquired data should enable a DRL agent to perform autonomous decision making, applying a reinforcement learning approach. We propose a simulation of this framework including several training and testing environments, that should be used for developing a DRL agent.
AB - Factory planning can increase the productivity of manufacturing significantly, though the process is expensive when it comes to cost and time. In this paper, we propose an Unmanned Aerial Vehicle (UAV) framework that accelerates this process and decreases the costs. The framework consists of a UAV that is equipped with an IMU, a camera and a LiDAR sensor in order to navigate and explore unknown indoor environments. Thus, it is independent of GNSS and solely uses on-board sensors. The acquired data should enable a DRL agent to perform autonomous decision making, applying a reinforcement learning approach. We propose a simulation of this framework including several training and testing environments, that should be used for developing a DRL agent.
KW - autonomous exploration
KW - autonomous navigation
KW - Deep Reinforcement Learning
KW - factory planning
KW - GNSS-denied environment
KW - UAV
UR - http://www.scopus.com/inward/record.url?scp=85132542361&partnerID=8YFLogxK
U2 - 10.1109/ICCAR55106.2022.9782602
DO - 10.1109/ICCAR55106.2022.9782602
M3 - Conference contribution
AN - SCOPUS:85132542361
SN - 9781665481175
SP - 388
EP - 393
BT - 2022 8th International Conference on Control, Automation and Robotics
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th International Conference on Control, Automation and Robotics, ICCAR 2022
Y2 - 8 April 2022 through 10 April 2022
ER -