Browsing by Author "Burguillo-Rial, Juan Carlos"
Now showing 1 - 4 of 4
- Balancing Plug-In for Stream-Based Classification
  Publication. de Arriba-Pérez, Francisco; García-Méndez, Silvia; Leal, Fátima; Malheiro, Benedita; Burguillo-Rial, Juan Carlos
  The latest technological advances drive the emergence of countless real-time data streams fed by users, sensors, and devices. These data sources can be mined with predictive and classification techniques to support decision-making in fields like e-commerce, industry, or health. In particular, stream-based classification is widely used to categorise incoming samples on the fly. However, the distribution of samples per class is often imbalanced, degrading the performance and fairness of machine learning models. To overcome this drawback, this paper proposes Bplug, a balancing plug-in for stream-based classification that minimises the bias introduced by data imbalance. The plug-in first determines the degree of class imbalance and then synthesises data statistically through non-parametric kernel density estimation. Experiments with real data from Wikivoyage and the Metro of Porto show that Bplug maintains inter-feature correlation and improves classification accuracy. Moreover, it works both online and offline.
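The balancing step described above (fit a non-parametric kernel density estimate to the minority class, then draw synthetic samples from it) can be sketched as follows. This is an illustrative reconstruction, not the paper's Bplug implementation; `kde_oversample` and the toy data are assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_oversample(X_minority, n_new):
    """Illustrative sketch: fit a Gaussian KDE to the minority class and
    draw synthetic samples from it. Because the kernels share the data's
    covariance structure, inter-feature correlation is largely preserved."""
    kde = gaussian_kde(X_minority.T)   # gaussian_kde expects shape (d, n)
    return kde.resample(n_new).T       # back to shape (n_new, d)

# Toy minority class: 20 correlated 2-D samples; synthesise 180 more
# so the class reaches parity with a hypothetical 200-sample majority.
rng = np.random.default_rng(42)
X_min = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=20)
X_synth = kde_oversample(X_min, n_new=180)
print(X_synth.shape)  # (180, 2)
```

In a streaming setting the KDE would be refit (or incrementally updated) as new minority samples arrive, which is what lets the plug-in work both online and offline.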
- Explainable Classification of Wiki Streams
  Publication. García-Méndez, Silvia; Leal, Fátima; de Arriba-Pérez, Francisco; Malheiro, Benedita; Burguillo-Rial, Juan Carlos
  Web 2.0 platforms, like wikis and social networks, rely on crowdsourced data and, as such, are prone to data manipulation by ill-intended contributors. This research proposes the transparent identification of wiki manipulators through the classification of contributors as benevolent or malevolent humans or bots, together with the explanation of the attributed class labels. The system comprises: (i) stream-based data pre-processing; (ii) incremental profiling; and (iii) online classification, evaluation, and explanation. In particular, the system profiles contributors and contributions by combining directly collected features with content- and side-based engineered features. The experimental results obtained with a real data set collected from Wikivoyage – a popular travel wiki – attained 98.52% classification accuracy and 91.34% macro F-measure. Ultimately, this work addresses data reliability in order to prevent detrimental information manipulation.
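The incremental-profiling step (ii) above can be sketched as maintaining running statistics per contributor, updated in O(1) on every incoming event. This is a minimal illustration; the class, feature names, and thresholds are assumptions, not the paper's exact feature set:

```python
from collections import defaultdict

class ContributorProfiler:
    """Sketch of incremental profiling: each incoming edit event updates
    running statistics for its contributor, so profiles stay current
    without ever re-scanning the stream. Features are illustrative."""

    def __init__(self):
        self.stats = defaultdict(lambda: {"edits": 0, "reverted": 0, "chars": 0})

    def update(self, editor, edit_size, was_reverted):
        s = self.stats[editor]
        s["edits"] += 1
        s["reverted"] += int(was_reverted)
        s["chars"] += edit_size

    def features(self, editor):
        # Derived features fed to the online classifier.
        s = self.stats[editor]
        n = max(s["edits"], 1)
        return {"edit_count": s["edits"],
                "revert_ratio": s["reverted"] / n,
                "mean_edit_size": s["chars"] / n}

profiler = ContributorProfiler()
for editor, size, rev in [("alice", 120, False), ("bot42", 5, True), ("bot42", 4, True)]:
    profiler.update(editor, size, rev)
print(profiler.features("bot42"))
# {'edit_count': 2, 'revert_ratio': 1.0, 'mean_edit_size': 4.5}
```

A high revert ratio combined with bot-like editing cadence is the kind of side-based signal an explainable classifier can surface when justifying a "malevolent bot" label.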
- Interpretable Classification of Wiki-Review Streams
  Publication. García-Méndez, Silvia; Leal, Fátima; Malheiro, Benedita; Burguillo-Rial, Juan Carlos
  Wiki articles are created and maintained by a crowd of editors, producing a continuous stream of reviews. Reviews can take the form of additions, reverts, or both. This crowdsourcing model is exposed to manipulation, since neither reviews nor editors are automatically screened and purged. To protect articles against vandalism or damage, the stream of reviews can be mined to classify reviews and profile editors in real time. The goal of this work is to anticipate and explain which reviews to revert, so that editors are informed why their edits will be reverted. The proposed method employs stream-based processing, updating the profiling and classification models on each incoming event. The profiling uses side- and content-based features obtained through Natural Language Processing, and editor profiles are incrementally updated based on their reviews. Since the proposed method relies on self-explainable classification algorithms, it is possible to understand why a review has been classified as a revert or a non-revert. In addition, this work contributes an algorithm for generating synthetic data for class balancing, making the final classification fairer. The proposed online method was tested with a real data set from Wikivoyage, balanced through the aforementioned synthetic data generation. The results attained near-90% values for all evaluation metrics (accuracy, precision, recall, and F-measure).
- Simulation, Modelling and Classification of Wiki Contributors: Spotting the Good, the Bad, and the Ugly
  Publication. García-Méndez, Silvia; Leal, Fátima; Malheiro, Benedita; Burguillo-Rial, Juan Carlos; Veloso, Bruno; Chis, Adriana E.; González–Vélez, Horacio
  Data crowdsourcing is a data acquisition process where groups of voluntary contributors feed platforms with highly relevant data ranging from news, comments, and media to knowledge and classifications. It typically processes user-generated data streams to provide and refine popular services such as wikis, collaborative maps, e-commerce sites, and social networks. Nevertheless, this modus operandi raises severe concerns regarding ill-intentioned data manipulation in adversarial environments. This paper presents a simulation, modelling, and classification approach to automatically identify human and non-human (bot) as well as benign and malign contributors by using data fabrication to balance classes within experimental data sets, data stream modelling to build and update contributor profiles, and, finally, autonomic data stream classification. Employing Wikivoyage – a free worldwide wiki travel guide open to contributions from the general public – as a testbed, our approach significantly boosts the confidence and quality of the classifier by using a class-balanced data stream comprising both real and synthetic data. Our empirical results show that the proposed method distinguishes between benign and malign bots as well as human contributors with a classification accuracy of up to 92%.
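The simulation and data-fabrication step described above can be sketched as a generator that emits labelled contributor events with configurable class proportions, so the downstream classifier sees a balanced stream. The four classes mirror the paper's benign/malign × human/bot split; the event fields, rates, and probabilities are illustrative assumptions:

```python
import random

CLASSES = ["benign_human", "malign_human", "benign_bot", "malign_bot"]

def simulate_stream(n_events, proportions, seed=0):
    """Hypothetical sketch: fabricate a labelled stream of contributor
    events. `proportions` controls class balance (equal weights yield
    a balanced stream). Rates and revert probabilities are made up."""
    rng = random.Random(seed)
    for _ in range(n_events):
        label = rng.choices(CLASSES, weights=proportions, k=1)[0]
        # Assumed behaviour: bots edit in rapid bursts; malign
        # contributors see their edits reverted far more often.
        edit_interval = rng.expovariate(5.0 if "bot" in label else 0.5)
        revert_prob = 0.8 if "malign" in label else 0.05
        yield {"label": label,
               "edit_interval_s": edit_interval,
               "reverted": rng.random() < revert_prob}

# Equal weights -> roughly 250 events per class.
balanced = list(simulate_stream(1000, proportions=[1, 1, 1, 1]))
```

Events like these can then feed the incremental profiling and autonomic stream classification stages alongside the real Wikivoyage data.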