Data-driven electric motor control
The utilization of modern electric motors is subject to a multitude of demands that must be considered when designing the control scheme. While safe, low-wear operation has the highest priority, the various performance requirements and efficiency specifications must be matched precisely to the respective field of application. Hence, the controller design process requires advanced expert knowledge of the available motor systems, and its results depend heavily on the available information about the specific motor behavior and on the ability of the associated control engineers to use this knowledge. The emergence of modern, optimization-based control methods such as model predictive control adds another bottleneck to the controller design, because the performance of such algorithms is strongly tied to the real-time capability of the computation hardware; consequently, a compromise between hardware cost and performance loss must often be accepted.
In contrast to the standard model-driven controller design procedure, a data-driven approach captures the behavior of a motor system from direct interaction with the plant. Reinforcement learning, an area of machine learning situated between supervised and unsupervised learning, enables efficient processing of the incoming measurements and tunes the controller such that the motor is utilized optimally with respect to the measurable events. This approach inherently incorporates any parasitic behavior without the need for a parameterized mathematical drive model, and can therefore compensate unwanted effects without comprehensive physical analysis. Moreover, the resulting control algorithm can be formulated as an artificial neural network, which is an explicit mathematical mapping and thus demands far less online computation than implicit policies such as model predictive controllers.
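To illustrate why an explicit neural-network policy is cheap to evaluate online, the following sketch runs a small two-layer network as a control law. All dimensions and weights are hypothetical placeholders (a trained controller would obtain its weights from reinforcement learning); the point is that each control step reduces to a fixed number of matrix-vector products, with no optimization problem solved at runtime.

```python
import numpy as np

# Hypothetical dimensions: the state could hold measured currents, speed
# and reference values; the action could hold normalized voltage commands.
n_state, n_hidden, n_action = 6, 32, 2

# Placeholder weights; in practice these result from RL training.
rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((n_hidden, n_state))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_action, n_hidden))
b2 = np.zeros(n_action)

def policy(x):
    """Explicit control law: two matrix-vector products and two tanh layers.

    Unlike model predictive control, no optimization is performed online,
    so the per-step computational cost is small and constant.
    """
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)  # tanh bounds the actions to [-1, 1]

x = rng.standard_normal(n_state)  # some measured, normalized state
u = policy(x)
print(u.shape)  # → (2,)
```

The bounded output range of the final tanh layer is a common design choice for actuation limits, e.g. normalized inverter voltages.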
Research in this field targets a reliable and universally applicable drive controller design process that requires no additional expert knowledge on the user side. As part of this ongoing research, the gym-electric-motor (GEM) Python toolbox [1][2] for easily accessible motor simulation has been published in order to accelerate the development and validation of corresponding reinforcement learning controllers. The results obtained so far include a simulative proof-of-concept application to current control of a permanent magnet synchronous motor drive [3], which validates the basic design strategy and underlines the potential of data-driven controller design.
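The learn-by-interaction workflow that GEM supports can be sketched with a Gym-style reset/step loop. The environment below is a deliberately simplified stand-in, not the actual GEM API: it models a single stator current via the first-order dynamic di/dt = (u − R·i)/L, discretized with forward Euler, and the agent is a crude proportional action rather than a trained RL policy. All parameter values are illustrative assumptions.

```python
import numpy as np

class ToyCurrentEnv:
    """Toy motor-current environment in the Gym reset/step style.

    NOT the gym-electric-motor API; a minimal stand-in that integrates
    di/dt = (u - R*i)/L with forward Euler and rewards small control error.
    """

    def __init__(self, R=0.5, L=1e-3, tau=1e-4, i_ref=5.0):
        self.R, self.L, self.tau, self.i_ref = R, L, tau, i_ref  # assumed values

    def reset(self):
        self.i = 0.0
        return np.array([self.i, self.i_ref])

    def step(self, u):
        # One forward-Euler step of the current dynamic.
        self.i += self.tau * (u - self.R * self.i) / self.L
        reward = -abs(self.i_ref - self.i)  # penalize the control error
        return np.array([self.i, self.i_ref]), reward, False, {}

env = ToyCurrentEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(200):
    # Placeholder agent: proportional action on the reference error;
    # an RL agent would instead adapt its policy from (obs, reward) data.
    u = 2.0 * (obs[1] - obs[0])
    obs, reward, done, info = env.step(u)
    total_reward += reward
print(round(obs[0], 2))  # → 4.0 (steady-state error of the crude P-action)
```

An RL agent would use the accumulated reward signal to improve its policy over many such interactions; the residual steady-state error of the proportional placeholder is exactly the kind of deficiency a learned controller can eliminate.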
In particular, assuring control-loop stability and constructing optimal training scenarios remain unsolved problems that need further examination before the data-driven motor control approach becomes a competitive alternative to classical methods. Furthermore, the planned research in this domain involves learning from actual plant data in test bench environments, the holistic design of combined observer-controller algorithms for motor applications whose states are only partially measurable, and the development of data-efficient learning toolchains that can be deployed to rapid control prototyping hardware.
Publications
[1] https://github.com/upb-lea/gym-electric-motor
[2] A. Traue, G. Book, W. Kirchgässner and O. Wallscheid, "Toward a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control," in IEEE Transactions on Neural Networks and Learning Systems, 2020
[3] M. Schenke, W. Kirchgässner and O. Wallscheid, "Controller Design for Electrical Drives by Deep Reinforcement Learning: A Proof of Concept," in IEEE Transactions on Industrial Informatics, vol. 16, no. 7, pp. 4650-4658, 2020