What is it about?

NILM (non-intrusive load monitoring) tries to decompose a building's aggregate energy consumption into its individual loads. It has been shown that the machine learning models this requires struggle to achieve satisfying results when they are transferred from the house they were trained on to a different house. In this study, we try to pinpoint the reasons for these difficulties so that future research can take them into account.
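As a toy illustration of the transfer problem (all numbers and the threshold "model" are invented for this sketch, not taken from the study), consider a detector fit to one house's kettle signature and then applied to a house whose kettle draws much less power:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_house(kettle_watts, n=200):
    """Aggregate signal = always-on base load + one kettle activation + noise."""
    base = 80.0
    kettle_on = np.zeros(n)
    kettle_on[50:70] = kettle_watts   # kettle switched on for 20 samples
    noise = rng.normal(0, 5, n)
    aggregate = base + kettle_on + noise
    return aggregate, kettle_on > 0   # signal, ground-truth on/off labels

# "Train" on house A: place a detection threshold halfway between the
# base load and the kettle power observed in house A.
agg_a, truth_a = simulate_house(kettle_watts=2000)
threshold = 80 + 2000 / 2

# Transfer to house B, whose kettle has a much weaker signature.
agg_b, truth_b = simulate_house(kettle_watts=800)

pred_a = agg_a > threshold
pred_b = agg_b > threshold

print(f"accuracy on house A: {(pred_a == truth_a).mean():.2f}")
print(f"accuracy on house B: {(pred_b == truth_b).mean():.2f}")
```

The threshold works perfectly on house A but misses every kettle activation in house B, because house B's signature (800 W) never crosses a boundary tuned to house A's 2000 W kettle; real NILM models are far more sophisticated, but the same mismatch between training-house and test-house signatures is at play.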


Why is it important?

Understanding why models perform the way they do is a necessary step toward improving them effectively. Since there are several possible reasons why NILM models could struggle in a new domain, it is especially important to know where to focus. We find that background noise, introduced by varying device constellations, is probably not the reason for the performance drops. At the same time, we can confirm the intuition that significantly different device signatures have a large impact on model performance.

Perspectives

Had we found that the background noise introduced by the device constellation was the main reason NILM models drop in performance when transferred to a new house without retraining, the implications would have been far worse than they are now: taking all possible device configurations into account during training would scale very badly given the number of devices usually present in modern households. On the flip side, finding that a vastly differing device signature is the main reason models struggle is very encouraging, as it implies the struggles may be mitigated by expanding the existing training data per device (e.g., using a public repository of device signatures).

Justus Breyer

Read the Original

This page is a summary of: Investigating Domain Bias in NILM, October 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3671127.3699532.
