Preproject 2020

Julia Maria Graham

Title: Implementation of an intrusion system using Dynamic Mode Decomposition and Compressed Sensing

Abstract: Driven in particular by the development of Deep Learning methods, the past few years have brought major advancements in the fields of Object Detection and Computer Vision. Although Deep Learning methods show impressive classification capabilities, they are in turn highly computationally demanding. Furthermore, they exhibit a complexity that renders them difficult to dissect and fully understand, which is an unwanted trait in safety-critical applications.

Consequently, there remains a pressing need for alternative Object Detection methods that are less costly to train and more interpretable, while matching the performance of Deep Learning models.

This report implements an Object Detection framework based on two fairly simple methods with sound mathematical foundations, namely Dynamic Mode Decomposition and Compressed Sensing. It is applied as an intrusion system implemented on low-cost hardware, with the aim of recognising faces in recorded videos. Results using a small data set of videos are presented, and the feasibility of extending the system to larger-scale applications is discussed.
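As a rough, illustrative sketch only (not taken from the report), the background/foreground separation that Dynamic Mode Decomposition enables on video data could look as follows in Python, assuming each grayscale frame has been flattened into a column of a data matrix; the rank, frequency threshold, and unit time step are placeholder choices.

\begin{verbatim}
import numpy as np

def dmd_background(frames, rank=10):
    """Sketch: background/foreground separation of a video with exact DMD.
    frames: (n_pixels, n_frames) array, each column a flattened frame."""
    X, Y = frames[:, :-1], frames[:, 1:]            # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]     # truncate to low rank
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W    # DMD modes
    omega = np.log(eigvals.astype(complex))         # mode frequencies (dt = 1)
    b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]
    bg = np.abs(omega) < 1e-2                       # near-zero frequency = background
    t = np.arange(frames.shape[1])
    background = (Phi[:, bg] * b[bg]) @ np.exp(np.outer(omega[bg], t))
    return background.real, frames - background.real
\end{verbatim}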

Vebjørn Malmin

Title: Model predictive control on piecewise affine neural networks

Abstract: Learning algorithms, with neural networks at the pinnacle, have over the last decade revolutionized model creation and identification. Predictions from neural network models are often state of the art. Unfortunately, the lack of formal mathematical proofs of properties such as stability raises concerns about their usage in safety-critical applications. To address these problems, an algorithm for converting some neural networks into their piecewise affine representation has been developed.

One of the challenges with the alternative representation is its memory usage and general computational complexity. The transformation is highly dependent on the number of affine regions. This report investigates how to limit the number of regions of a network model through weight regularization. The model is trained on a rod pendulum, and the results are used to showcase the feasibility of a model predictive controller.
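The report's own training setup is not reproduced here, but the core idea of limiting the number of affine regions through weight regularization can be sketched with a small ReLU network and an L1 penalty in PyTorch; the layer sizes, penalty weight, and learning rate below are hypothetical.

\begin{verbatim}
import torch
import torch.nn as nn

# Each activation pattern of the ReLU units corresponds to one affine
# region of the network's piecewise affine representation.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-4  # assumed regularization strength

def training_step(x, y):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    # The L1 penalty drives weights towards zero, which tends to reduce
    # the number of distinct affine regions of the trained network.
    loss = loss + l1_weight * sum(p.abs().sum() for p in model.parameters())
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}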

Andrine Elsetrønning

Title: Analysis of Lung Sound Data for Abnormality Detection

Highlight: 1 Journal article submitted

Abstract: Lung sounds refer to the sound generated by air moving through the respiratory system. These sounds, like most biomedical signals, are non-linear and non-stationary. A vital part of using lung sounds for disease detection is discriminating between normal and abnormal lung sounds. In this specialization project, several approaches for classifying between no-crackle and crackle lung sounds are explored. Decomposition methods such as EMD, EEMD, and DWT are used along with several feature extraction techniques to explore how various classifiers perform on the given task.

An open-source dataset downloaded from Kaggle, containing chest auscultations of varying quality, is used to evaluate the different combinations of decomposition and feature extraction methods. This report shows that the most accurate results are obtained when higher-order statistical and spectral features, along with Mel-frequency cepstral coefficients, are extracted directly from the breathing cycles and classified with a $k$-NN classifier. The proposed approach gave an accuracy of 84.38\%.
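The exact preprocessing and feature set are detailed in the report; purely as an illustration, the best-performing combination described above (MFCCs and higher-order statistics extracted from each breathing cycle, classified with $k$-NN) might be sketched as follows, with an assumed sampling rate and data layout.

\begin{verbatim}
import numpy as np
import librosa
from scipy.stats import skew, kurtosis
from sklearn.neighbors import KNeighborsClassifier

def extract_features(cycle, sr=4000):
    """Hypothetical feature vector for one breathing cycle: mean MFCCs
    plus a few higher-order statistics of the raw signal."""
    mfcc = librosa.feature.mfcc(y=cycle, sr=sr, n_mfcc=13).mean(axis=1)
    stats = np.array([skew(cycle), kurtosis(cycle), np.std(cycle)])
    return np.concatenate([mfcc, stats])

def train_knn(cycles, labels, k=5):
    """cycles: list of 1-D arrays; labels: 0 = no-crackle, 1 = crackle."""
    X = np.vstack([extract_features(c) for c in cycles])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)
\end{verbatim}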

Maria Skatvedt

Title: Deep Feed-Forward Neural Network Models for Bottom-detection in Doppler Velocity Loggers

Abstract: In the growing field of Autonomous Underwater Vehicles, as for other subsea applications, it is essential to provide accurate positional estimates due to the unavailability of external satellite references. Utilizing a Doppler Velocity Logger to measure an object’s velocity enables correction of any time-varying bias introduced in positional estimate integration processes. This further reduces the positional errors that would otherwise increase quadratically over time. This correction relies on the Doppler Velocity Logger’s high accuracy, which in turn requires a reliable and robust bottom-tracking algorithm. The algorithm must detect the sample-window stemming from the bottom, as these samples are evaluated for Doppler shift, utilizing the Doppler effect to produce velocity estimates. Considering all variations in the received amplitude-signals requires an increasingly complex heuristic algorithm that would demand proportionally high computational power. A heuristic algorithm would also require knowledge of each variation.

The objective was to explore a Machine Learning approach for predicting the sample-windows in the Doppler Velocity Logger’s amplitude-signals, to create a generalized tool for bottom-detection and provide a high-accuracy model. Two neural network models, a 1D-Convolutional Neural Network and a Multi-Layer Perceptron, were designed and trained to perform classification and regression, respectively. The former predicts bottom and non-bottom classes in a single ping, while the latter predicts the sample-window integer-interval. The raw training data consisted of 29000 received and processed acoustic pings for each of the 4 transducers of a Doppler Velocity Logger, collected by an external company, Nortek, in the Inner Oslo fjord. This amplitude-data was labeled by Nortek’s current heuristic algorithm, enabling a supervised learning approach to train the networks.

Upon evaluation of predictions made on the test set, both neural networks were able to predict the sample-windows with an accuracy of over 99\%. The F1-score was over 95\% on the cleaned test set, with a Mean Absolute Error of 0.42\% and 0.29\% for the MLP and the 1D-CNN, respectively. Qualitative visualization of the predictions substantiated the high accuracy, as they correlated strongly with the original bottom-tracking labels. However, the 1D-CNN appeared to perform better than the MLP on pings missed by the original algorithm. In general, the 1D-CNN model also predicted with higher accuracy and lower loss than the MLP, but the latter had a 95.67\% lower prediction time, which is advantageous. The high accuracy of the models and their good performance on previously missed pings substantiate the conclusion that it is possible to perform bottom-detection on the Doppler Velocity Logger’s amplitude-data using trained neural networks. The models also predicted the bottom where the original algorithm could not, which indicates that the models may outperform the original bottom-tracking algorithm.
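The report's network architectures are not reproduced here; as a minimal sketch of the regression variant (an MLP mapping one ping's amplitude profile to the start and end indices of the bottom sample-window), a Keras model with hypothetical layer sizes and ping length could look like this.

\begin{verbatim}
import tensorflow as tf

PING_LENGTH = 1024  # assumed number of amplitude samples per ping

model = tf.keras.Sequential([
    tf.keras.Input(shape=(PING_LENGTH,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),  # start and end of the bottom sample-window
])
model.compile(optimizer="adam", loss="mae")  # MAE matches the reported error metric
# model.fit(amplitudes, windows)  # amplitudes: (N, PING_LENGTH), windows: (N, 2)
\end{verbatim}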

Fredrik Pedersen

Title: Comparing Physics-Based and Data-Driven Approaches for Modeling One-Dimensional Heat Conduction

Abstract: Our understanding of the world around us is constantly improving, and today it is possible to accurately describe complex dynamic processes and systems with equations. However, using purely physics-based models and high-resolution numerical methods for simulations and predictions can be extremely expensive, both computationally and economically. They are therefore not always suited for use in real time or for making predictions about the near future. It would be very valuable if we could perform lower-fidelity calculations and exploit recent advancements in the fields of artificial intelligence and machine learning to improve these results. Such improvements can include discovering and accounting for unknown physics and recovering information lost through simplifications or dimensionality reductions.

A large part of this specialization project was to understand the underlying physics and equations of heat conduction and how they could be transposed to a format that could be simulated in Python. Simulations of different resolutions were then run to decide the number of nodes for a low-fidelity model and a high-fidelity model based on computation time and mean squared error (MSE). A larger dataset was created by running several simulations with different boundary conditions. A deep neural network was used in an attempt to improve the performance of the low-fidelity model, but no significant improvement was found even when using a larger dataset. This paper also discusses and demonstrates the limitations of physics-based modeling and data-driven modeling. Furthermore, it presents the rationale for combining the two approaches in hybrid analysis and modeling (HAM) and how this can be used in my upcoming Master’s thesis.
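The report's discretization, resolutions, and boundary conditions are not repeated here; as a minimal sketch of how 1D heat conduction can be simulated in Python with an explicit finite-difference scheme, using placeholder parameter values:

\begin{verbatim}
import numpy as np

def simulate_heat_1d(n_nodes=50, alpha=1e-4, length=1.0, t_end=10.0,
                     left_bc=300.0, right_bc=400.0):
    """Explicit (FTCS) finite-difference solver for u_t = alpha * u_xx with
    fixed-temperature boundaries. Parameter values are placeholders only."""
    dx = length / (n_nodes - 1)
    dt = 0.4 * dx**2 / alpha            # respects the stability limit dx^2 / (2*alpha)
    u = np.full(n_nodes, left_bc)
    u[-1] = right_bc
    for _ in range(int(t_end / dt)):
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[0], u[-1] = left_bc, right_bc
    return u
\end{verbatim}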

Hanna Malm

Title: Concept of Digital Twin for Business Enterprise

Abstract: This report gives an insight into the value of Digital Twins for Business Enterprises and the challenges in creating them. To create a Digital Twin for a Business Enterprise, a dynamic model, consisting of several dynamic sub-models of business-critical processes, has to be created. A dynamic model framework is presented, and it is discussed how this model can be used to create a digital twin that creates value for the Enterprise. Challenges connected to this model and suggestions for further work towards creating a digital twin are also discussed. Suggestions for what types of data measurements are needed are given, as well as suggestions for methods of collecting and analysing the data.

Vilje Ness

Title: Concept of Digital Twin for Business Enterprise

Abstract: This report gives an insight into the value of Digital Twins for Business Enterprises and the challenges in creating them. To create a Digital Twin for a Business Enterprise, a dynamic model, consisting of several dynamic sub-models of business-critical processes, has to be created. A dynamic model framework is presented, and it is discussed how this model can be used to create a digital twin that creates value for the Enterprise. Challenges connected to this model and suggestions for further work towards creating a digital twin are also discussed. Suggestions for what types of data measurements are needed are given, as well as suggestions for methods of collecting and analysing the data.

Olav Pedersen

Title:

Abstract:

Kari Moe

Title: Evaluating Machine Learning towards a cost-effective 3D CAD modelling

Abstract: Smartphones have evolved to become powerful, widely available devices used daily all over the world. This study, in cooperation with the Norwegian company Hy5 AS, contributes to the research on accurate 3D scans by utilizing a low-cost solution based on smartphone images. Three-dimensional CAD models are obtained using photogrammetry, an image-based 3D reconstruction method. By measuring the error between 3D meshes, we show that computer-based software results in superior accuracy compared to 3D scanning apps with immediate feedback. Also, as image acquisition with smartphones may introduce both noise and coarse-scale images, the impact of image resolution on the reconstruction process is shown to be significant. By taking advantage of machine learning, a sub-field of artificial intelligence, and state-of-the-art methods for producing fine-scaled images with ESRGANs, the image resolution can be increased, resulting in more accurate 3D models. As high-resolution equipment may not be available in low-income countries, this report is a step in the direction of developing low-cost three-dimensional CAD models of the socket interface between a residual limb and a prosthetic device.
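The report relies on dedicated software for measuring the error between 3D meshes; purely as an illustration of one common way to quantify such an error (a one-sided nearest-neighbour distance between the meshes' vertex clouds), a sketch in Python could be:

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def mesh_vertex_error(vertices_a, vertices_b):
    """Mean and maximum distance from every vertex of mesh A (an (N, 3)
    array) to the closest vertex of mesh B. Illustrative only; this is
    not necessarily the metric used in the report."""
    distances, _ = cKDTree(vertices_b).query(vertices_a)
    return distances.mean(), distances.max()
\end{verbatim}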

Ole Jørgen Hannestad

Title:

Abstract:

Raja Iqran

Title:

Abstract:

Torkel Laache

Title: Performance of Deep Reinforcement Learning Algorithms in a Continuous Control Task

Abstract: Deep Learning (DL) has achieved revolutionary results in various fields of study, opening up possibilities for automating complex control tasks such as driving autonomous vehicles. Similar to how humans learn by trial and error, Deep Reinforcement Learning (DRL) can learn complex control policies with no a priori knowledge, only from observations of the environment and a reward function. Several DRL algorithms are capable of path following, but a fully autonomous vehicle must also handle reactive collision avoidance, increasing the task’s complexity.

This paper compares several DRL algorithms for controlling a marine vessel performing path following while avoiding static and dynamic obstacles perceived by rangefinder sensors. For this, the algorithms DDPG, TD3, PPO, A2C, and ACKTR were chosen based on their performance on similar tasks. With a simulator based on the OpenAI Gym toolkit, training and evaluation of the algorithms are performed in demanding, stochastically generated environments. Only PPO, A2C, and ACKTR could reach satisfactory results during training, while DDPG and TD3 performed poorly in comparison. Of the three, PPO stood out due to its short training time, achieving similar results to the more computationally complex algorithms A2C and ACKTR.
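The vessel simulator and hyperparameters used in the project are not reproduced here; as a minimal sketch of how one of the compared algorithms (PPO) can be trained on a Gym-style environment, the snippet below uses the stable-baselines3 library and a standard environment as stand-ins, which may differ from the library and environment used in the project.

\begin{verbatim}
from stable_baselines3 import PPO

# A standard Gym environment stands in for the vessel simulator here;
# the project's environment, library, and hyperparameters may differ.
model = PPO("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=100_000)

# The trained policy then maps observations to control actions:
# action, _ = model.predict(obs, deterministic=True)
\end{verbatim}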

Halvor Teigen

Title: Investigating performance of Deep Reinforcement Learning algorithms for Path-Following and Collision Avoidance in Autonomous Vessels

Abstract: In this project, we explore various Deep Reinforcement Learning algorithms and investigate their performance in the application of path-following and obstacle-avoidance for autonomous vessels. This is done through training of multiple agents for each algorithm as well as extensive generalization testing. A custom \textit{performance function} is developed to create a quantitative measure of performance for comparison of the selected algorithms. A \textit{usability function} that takes both performance and training time into consideration is also developed.

The results show that PPO and ACKTR clearly outperform the other algorithms, both in terms of performance on the training scenario and generalization performance in real-world scenarios. PPO stands out in terms of usability and outperforms all other algorithms across all tests. This makes PPO the preferred algorithm for this application.
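The report defines its own performance and usability functions; purely as a hypothetical illustration of how performance and training time could be combined into a single usability score, a simple weighting might look like this.

\begin{verbatim}
def usability(performance, training_time_hours, time_weight=0.1):
    """Hypothetical usability score: higher performance is better and longer
    training time is penalized. The form and weighting are illustrative only;
    the report's own definition may differ."""
    return performance - time_weight * training_time_hours
\end{verbatim}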

Sindre Stenen Blakseth

Title: Machine Learning-Based Correction Methods for Numerical Solutions of the One-Dimensional Heat Equation

Abstract: