Special Session on Non-traditional Learning Methods
Chair: Plamen Angelov
Traditionally, a number of assumptions are made in (machine) learning. For example, it is usually assumed that the observed data samples are fully independent of each other, that there is an infinite number of observations, and that the statistical distribution of the data can be described by one known smooth distribution or a finite mixture of them, e.g. Gaussian, Poisson, Cauchy, etc. Stationarity is also often an implicit assumption. Last but not least, nearly every machine learning method implicitly assumes the structure of the model/system/classifier/predictor beforehand and keeps it fixed, not only during the learning phase but also during the exploitation phase. In addition, issues such as drift and shift in data streams are closely related to non-stationarity, but also to the structure of the model and its ability not only to adapt, but also to evolve.
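To make the non-stationarity point concrete, the following is a minimal sketch (not any specific method from this session) of how an abrupt shift in a data stream violates the stationarity assumption and can be flagged by a naive sliding-window check; the window size and threshold are illustrative choices:

```python
import random

def detect_drift(stream, window=50, threshold=1.0):
    """Return the first index at which the mean of the most recent
    window deviates from the mean of the preceding window by more
    than `threshold`; return None if no such point is found.
    This is a deliberately simple drift check, for illustration only."""
    for i in range(2 * window, len(stream) + 1):
        prev = stream[i - 2 * window : i - window]
        curr = stream[i - window : i]
        prev_mean = sum(prev) / window
        curr_mean = sum(curr) / window
        if abs(curr_mean - prev_mean) > threshold:
            return i  # drift flagged at this point in the stream
    return None

random.seed(0)
# A stationary segment followed by an abrupt shift in the mean at t = 300:
stream = [random.gauss(0.0, 1.0) for _ in range(300)]
stream += [random.gauss(3.0, 1.0) for _ in range(300)]

print(detect_drift(stream))  # flags an index shortly after the shift at 300
```

A fixed, pre-trained model would go on making systematically biased predictions after such a shift; an evolving model could use a signal like this to adapt its parameters or its structure.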
This special session will cover topics including, but not limited to, anomaly/fault detection and identification, clustering and classification, and control, from the perspective of non-traditional learning methods that break the above-mentioned implicit assumptions and offer new ways of approaching these problems.