Accepted Papers
Fact or Artifact? Revise Layer-wise Relevance Propagation on Various ANN Architectures

Marco Landt-Hayen1,2, Willi Rath2, Martin Claus1,2, and Peer Kröger1, 1Christian-Albrechts-Universität zu Kiel, Germany, 2GEOMAR Helmholtz Centre for Ocean Research, Germany

ABSTRACT

Layer-wise relevance propagation (LRP) is a widely used and powerful technique to reveal insights into various artificial neural network (ANN) architectures. LRP is often used in the context of image classification. The aim is to understand which parts of the input sample have the highest relevance and hence the most influence on the model prediction. Relevance can be traced back through the network to attribute a certain score to each input pixel. Relevance scores are then combined and displayed as heat maps, giving humans an intuitive visual understanding of classification models. Opening the black box to understand the classification engine in great detail is essential for domain experts to gain trust in ANN models. However, there are pitfalls in terms of model-inherent artifacts included in the obtained relevance maps that can easily be missed. For a valid interpretation, these artifacts must not be ignored. Here, we apply and revise LRP on various ANN architectures trained as classifiers on geospatial and synthetic data. Depending on the network architecture, we show techniques to control model focus and give guidance to improve the quality of the obtained relevance maps to separate facts from artifacts.
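The relevance backpass described above can be sketched for a single dense layer with the widely used LRP-epsilon rule (a standard rule from the LRP literature; the paper's specific propagation rules, architectures, and layer shapes are not given here, so all values below are illustrative):

```python
import numpy as np

def lrp_dense(a, w, relevance_out, eps=1e-6):
    """Redistribute relevance from a dense layer's output to its input using
    the LRP-epsilon rule: R_i = a_i * sum_j w_ij * R_j / (z_j + eps*sign(z_j))."""
    z = a @ w                                   # pre-activations, shape (n_out,)
    s = relevance_out / (z + eps * np.sign(z))  # stabilised per-output ratio
    return a * (w @ s)                          # relevance per input unit

# Toy layer: 3 inputs, 2 outputs; relevance starts at the predicted class score.
a = np.array([1.0, 0.5, 2.0])
w = np.array([[0.2, -0.1],
              [0.4,  0.3],
              [-0.1, 0.5]])
R_out = np.array([0.0, 1.0])     # all relevance on class 2
R_in = lrp_dense(a, w, R_out)
print(R_in, R_in.sum())          # relevance is (approximately) conserved
```

Applying such a rule layer by layer, from the output back to the input, yields the per-pixel scores that are rendered as heat maps; the near-conservation of total relevance across each layer is what makes the scores interpretable.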

KEYWORDS

Artificial Neural Networks, Image Classification, Layer-wise Relevance Propagation, Geospatial Data, Explainable AI.


Tests and Linguistic Platforms: Al Erfaan Proficiency Test as a Model

Nisrine El Hannach1 and Ali Boulaalam2, 1Department of English, Mohamed 1st University, Oujda, Morocco, 2Moulay Ismail University, Meknes, Morocco

ABSTRACT

This research revolves around the utilization of digitization, or digital linguistic platforms, in creating effective language tests capable of assessing learners' proficiency in their first or second language. This is achieved by measuring the learners' actual level in the four language skills: listening, reading, speaking, and writing. The attainment of these skills relies on a linguistic platform that employs linguistic algorithms to describe and prepare linguistic material. It also incorporates computer algorithms to construct local patterns that enable the automatic reading and utilization of the linguistic algorithms. Additionally, the research introduces and defines the proficiency test developed by the Erfaan Institute, an electronic test primarily designed to measure the skills of Arabic learners who are not native speakers, particularly those in advanced stages of learning Arabic (C1-C2 level according to the Common European Framework), equivalent to the high school level. This test addresses the gap in assessing the skills of non-native Arabic learners and aims to elevate Arabic to the status of global languages with standardized measures for assessing the competencies of non-native speakers.

KEYWORDS

Skills, Linguistic Algorithms, Computer Algorithms, Platforms, Measurement.


Tolstoy’s Genius Explored by Deep Learning Using Transformer Architecture

Shahriyar Guliyev, 1Department of Electronics and Information Technologies, Nakhchivan State University, Nakhchivan, Azerbaijan

ABSTRACT

Artificial Narrow Intelligence is in the phase of moving towards AGI, which will attempt to decide as a human being does. We are getting closer to it each day, but AI is still indefinite to many, although at its core it is no different from any other set of mathematically defined computer operations. Generating new data from a pre-trained model introduces new challenges to science & technology. In this work, we design such an architecture from scratch, solve the problems encountered, and introduce alternative approaches. Using a deep thinker, Tolstoy, as the object of study is a source of motivation for the entire research.

KEYWORDS

AI, ML, ANN, artificial neurons, DL, NLP, NLG, Transformer, Generative Pre-trained Transformer, Tolstoy, Computational Linguistics, Social Sciences, Neural Information Processing, Human Language Technologies.


Developing a Multidimensional Fuzzy Deep Learning for Cancer Classification Using Gene Expression Data

Mahmood Khalsan1,3, Mu Mu1, Eman Salih Al-shamery, Suraj Ajit1, Lee Machado2, and Michael Opoku Agyeman1, 1Advanced Technology Research Group, Faculty of Arts, Science and Technology, The University of Northampton, UK, 2Centre for Physical Activity and Life Science, Faculty of Arts, Science, and Technology, The University of Northampton, UK, 3Computer Science Department, University of Babylon, College of Information

ABSTRACT

In the realm of cancer research, the identification of biomarker genes plays a pivotal role in accurate classification and diagnosis. This study delves into the intersection of machine learning and gene selection to enhance the precision of biomarker identification for cancer classification, leveraging advanced computational techniques. In the quest for improved cancer classification, studies face challenges due to high-dimensional gene expression data and limited gene relevance. To address these challenges, we developed a novel multidimensional fuzzy deep learning (MFDL) model to select a subset of significant genes and use those genes to train the model for better accuracy. MFDL integrates fuzzy concepts within filter and wrapper methods to select significant genes and applies a fuzzy classifier to improve cancer classification accuracy. Through rigorous experimentation and validation on six gene expression datasets, the findings demonstrate the efficacy of our methodology on diverse cancer data. The results underscore the importance of integrative computational methods in deciphering the intricate genomic landscape of cancer and spotlight the potential for improved diagnostic accuracy. The developed model showcased outstanding performance across the six employed datasets, demonstrating an average accuracy of 98%, precision of 98.3%, recall of 97.6%, and an F1-score of 97.8%.
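The fuzzy gene-selection idea can be illustrated with a minimal sketch (the membership shape, thresholds, and gene scores below are hypothetical, not the paper's exact MFDL design): each gene's filter score is mapped to a fuzzy "highly relevant" membership, and genes whose membership clears a cutoff are kept for the classifier.

```python
def high_membership(score, lo=0.3, hi=0.7):
    """Piecewise-linear membership in a 'highly relevant' fuzzy set
    (0 below lo, 1 above hi, linear in between; bounds are illustrative)."""
    if score <= lo:
        return 0.0
    if score >= hi:
        return 1.0
    return (score - lo) / (hi - lo)

def select_genes(scores, threshold=0.5):
    """Keep genes whose fuzzy relevance membership reaches the threshold."""
    return [g for g, s in scores.items() if high_membership(s) >= threshold]

# Toy filter scores for three genes (values are made up for illustration).
scores = {"TP53": 0.9, "BRCA1": 0.6, "GAPDH": 0.2}
print(select_genes(scores))    # ['TP53', 'BRCA1']
```

A wrapper step would then re-score candidate subsets against a classifier; the fuzzy membership replaces a hard score cutoff, so borderline genes contribute gradually rather than being dropped outright.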

KEYWORDS

Deep learning, Gene selection, Cancer classification, Gene expression.


Pursuit-evasion Game Modelling in a Graph Using Petri Nets

Adel Djellal1, Hichem Mayache1, and Rabah Lakel2, 1Department of Electronics, Electrotechnics and Automation, National Higher School of Engineering and Technology, Annaba, Algeria, 2Department of Electronics, Badji Mokhtar University, Annaba, Algeria

ABSTRACT

Pursuit-evasion is one of the most studied interpretations in game theory: it supposes a finite environment, a finite number of pursuers, and a finite number of evaders. Most research proposes techniques to model and solve the problem with a minimum number of pursuers. In this paper, we present a novel technique to model pursuit-evasion search in a graph using Petri nets. The environment, after being modelled as a bidirectional graph, is converted into a Petri net with a certain number of places representing the behaviour of each area. Petri nets and their variants are a very powerful modelling tool for systems with finite state spaces. The model of each area is detailed, and the final net is the combination of the sub-nets for each area of the environment. The proposed technique can be used to validate any searching technique in a graph-based pursuit-evasion model.
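The place/transition mechanics underlying such a model can be sketched in a few lines (a generic Petri-net firing rule, not the paper's specific construction; place names are hypothetical): places hold tokens such as "pursuer in area A", and a transition fires only when every input place is marked, moving tokens along its edges.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds a token."""
    return all(marking.get(p, 0) >= 1 for p in transition["in"])

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p in transition["in"]:
        m[p] -= 1
    for p in transition["out"]:
        m[p] = m.get(p, 0) + 1
    return m

# Two areas A and B; moving the pursuer from A to B consumes the token in A.
move_A_to_B = {"in": ["pursuer@A"], "out": ["pursuer@B"]}
m0 = {"pursuer@A": 1}
m1 = fire(m0, move_A_to_B)
print(m1)   # {'pursuer@A': 0, 'pursuer@B': 1}
```

Composing one such sub-net per graph area, as the abstract describes, yields a net whose reachable markings are exactly the joint pursuer/evader configurations that a search strategy must cover.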

KEYWORDS

Pursuit-Evasion; Petri Net; Graph Theory.



Integrating Big Data and Data Mining Into Computer Science Education: Impacts and Strategies

LEE Ka-wai, Faculty of Education, The University of Hong Kong

ABSTRACT

This research paper explores the integration of big data analytics, data mining techniques, and database management into computer science education. It aims to understand how these technologies can enhance learning experiences and prepare students for a data-driven world. The study employs a mixed-methods approach, combining surveys, interviews, and data extraction from educational databases. The primary objectives include analyzing current trends in big data and data mining integration in computer science curricula, evaluating the effectiveness of these integrations in enhancing student learning outcomes, and proposing curriculum development recommendations. The literature review highlights the transformative impact of big data and the role of database management in educational contexts, emphasizing the need for innovative approaches and interdisciplinary strategies. The findings suggest that integrating big data and data mining positively impacts student engagement and learning outcomes, yet highlights challenges like resource constraints and educator training needs. The study concludes with the necessity of evolving curricula to meet the demands of the digital age and suggests areas for future research, including long-term impact studies and educator-focused research. This paper contributes to the understanding of how big data and data mining can be effectively incorporated into computer science education, highlighting critical areas for development in educational practices and policies.

KEYWORDS

Big Data Analytics, Data Mining Techniques, Computer Science Education, Curriculum Development, Educational Technology.


Evaluation of Flipped Classroom Environment With "ZOOMRBT" Android Application Using Fuzzy Delphi Method

Noor Izwan Nasir and Marina Ibrahim Mukhtar, Faculty of Technical and Vocational Education, Universiti Tun Hussein Onn, Malaysia.

ABSTRACT

The model of the flipped learning environment was modified for this study. The development of the "ZOOMRBT" Android application and the use of the Fuzzy Delphi Method to obtain an expert consensus were all part of the approach for this study. The first two parts of the project comprised examining the needs of the flipped classroom setting and obtaining the endorsement of three instructors who are professionals in design and technology. In the second stage, applications are gathered based on how well they fit into lesson plans, instructional videos, syllabi, project videos, and social media sharing. The Fuzzy Delphi Method is used in the third step to evaluate the model application's effectiveness. In line with the objectives of the Malaysian Ministry of Education's (MOE) Education Transformation initiative in the Malaysian Education Development Plan (PPPM) 2013-2025, the findings provide a model for future learning and help build a more student-centered learning environment.
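The consensus step of the Fuzzy Delphi Method can be sketched as follows (this follows common practice with triangular fuzzy numbers and a 0.5 acceptance threshold; the study's exact rating scale and thresholds are assumed, not stated above):

```python
def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, u) on a 0..1 scale."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def accepted(ratings, threshold=0.5):
    """An item reaches consensus when the average defuzzified
    expert rating meets the acceptance threshold."""
    avg = sum(defuzzify(r) for r in ratings) / len(ratings)
    return avg >= threshold

# Three experts rate one app feature as triangular fuzzy numbers (illustrative).
expert_ratings = [(0.4, 0.6, 0.8), (0.6, 0.8, 1.0), (0.4, 0.6, 0.8)]
print(accepted(expert_ratings))   # True
```

Each candidate application feature would be screened this way, keeping only the items the expert panel agrees on.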

KEYWORDS

Evaluation, Flipped Classroom Environment, ZOOMRBT Apps, Fuzzy Delphi Method.


Emoji-based Features and Textual-based Features for Arabic Tweets Sentiment Analysis: a Review Study

Manar Alfreihat, Omar Saad Almousa, Yahya Tashtoush, Computer Science Dept., Jordan University of Science and Technology, Irbid, Jordan

ABSTRACT

We conducted a rigorous search strategy that involved searching reputable databases, including the IEEE Xplore Digital Library, the dblp computer science bibliography, Springer, and Google Scholar, for peer-reviewed, original English-language papers published between 1990 and early 2021. A total of 203 studies were initially identified; after screening for eligibility, 174 articles were excluded for various reasons, leaving 29 articles for inclusion in the review. The study considered various sentiment analysis approaches, including the Lexicon-Based Model, the Emoticon Space Model, Emoji Interpretations, Deep Learning and ML Classifiers, TF-IDF, and Arabic Sentiment Analysis. The Lexicon-Based Model was found to increase the accuracy of the lexicon-based approach, with encouraging results reported for Arabic sentiment analysis. Additionally, studies have highlighted the usefulness of lexicons in sentiment analysis detection. The Emoticon Space Model was found to effectively leverage emoticon signals, outperforming previous advanced strategies and standard best runs.
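Of the features surveyed above, TF-IDF is the simplest to show concretely (a generic illustration, not any reviewed paper's pipeline; the toy tweets are made up): a token's weight in a tweet grows with its frequency there and shrinks with how many tweets contain it, so terms shared by every document score zero while distinctive tokens, including emojis, stand out.

```python
import math

def tfidf(term, doc, docs):
    """TF-IDF weight of a term in one tokenized document against a corpus."""
    tf = doc.count(term) / len(doc)                 # term frequency in this doc
    df = sum(1 for d in docs if term in d)          # document frequency
    idf = math.log(len(docs) / df)                  # rarity across the corpus
    return tf * idf

# Toy tokenized tweets; the emoji is treated as an ordinary token.
docs = [["great", "movie", "😍"], ["bad", "movie"], ["nice", "movie", "😍"]]
print(tfidf("😍", docs[0], docs))     # distinctive token, positive weight
print(tfidf("movie", docs[0], docs))  # appears everywhere, weight 0.0
```

Treating emojis as tokens in exactly this way is what lets emoji-based features enter the same classifiers as textual features.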

KEYWORDS

Emoji Sentiment Lexicon, Emoji-Based Features, Textual-Based Features, Sentiment Analysis, Emoji-Based Features Arabic.


Mixed-Distil-BERT: Code-Mixed Language Modeling for Bangla, English, and Hindi

Md Nishat Raihan, Dhiman Goswami, and Antara Mahmud, George Mason University, Fairfax, Virginia, USA

ABSTRACT

One of the most popular downstream tasks in the field of Natural Language Processing is text classification. Text classification tasks have become more daunting when the texts are code-mixed. Though they are not exposed to such text during pre-training, different BERT models have demonstrated success in tackling Code-Mixed NLP challenges. Again, in order to enhance their performance, Code-Mixed NLP models have depended on combining synthetic data with real-world data. It is crucial to understand how the BERT models’ performance is impacted when they are pretrained using corresponding code-mixed languages. In this paper, we introduce Tri-Distil-BERT, a multilingual model pre-trained on Bangla, English, and Hindi, and Mixed-Distil-BERT, a model fine-tuned on code-mixed data. Both models are evaluated across multiple NLP tasks and demonstrate competitive performance against larger models like mBERT and XLM-R. Our two-tiered pre-training approach offers efficient alternatives for multilingual and code-mixed language understanding, contributing to advancements in the field. Both models are available on Hugging Face.


Roughness of Fossil From Periodically Correlated Images

Rachid Sabre, Laboratory Biogéosciences CNRS, University of Burgundy/Institut Agro Dijon, France

ABSTRACT

The roughness descriptor is a parameter for characterizing the structure of the surface of the object studied. Digital images of the surfaces are used to provide roughness descriptors using texture analysis, which is part of image processing. Roughness descriptors from images are mainly used when the surface studied is deformable or would be damaged by measurements made by roughness measuring devices. In this paper, we propose a new image roughness descriptor adapted to surfaces containing certain periodicities, called periodically correlated signals. The proposed descriptor is based on the estimation of the spectral density of periodically correlated processes. To compare our descriptor to other descriptors existing in the literature, we measured the sensitivity of the descriptors to random Gaussian noise added to each image. In this work, images of fossils exhibiting a certain periodicity are the subject of study. This comparison shows that the proposed descriptor is more sensitive than the other descriptors for this type of image (the sensitivity value is higher). Thus, the novelty of this paper concerns the proposal of a descriptor adapted to images with periodicities. The idea is to develop adapted methods that take into account different surface structures to obtain descriptors that are more precise.

KEYWORDS

Periodically correlated, roughness image, spectral density.


Improving Few-shot Image Classification Through Multiple Choice Questions

Emmett D. Goodman, Dipika Khullar, Negin Sokhandan, Sujitha Martin, Yash Shah, Generative AI Innovation Center

ABSTRACT

Visual Question Answering (VQA) models have shown an impressive potential in allowing humans to learn about images using natural language. One promising application of such models is for image classification. Through a simple multiple choice language prompt (i.e. “Question: Is this A) a cat or B) a dog. Answer: ”) a VQA model can operate as a zero-shot image classifier, producing a classification label (i.e. “B) a dog.”). Compared to typical image encoders, VQA models offer an advantage: VQA-produced image embeddings can be infused with the most relevant visual information through tailored language prompts. Nevertheless, for most tasks, zero-shot VQA performance is lacking, either because of unfamiliar category names, or dissimilar pre-training data and test data distributions. We propose a simple method to boost VQA performance for image classification using only a handful of labeled examples and a multiple-choice question. This few-shot method is training-free and maintains the dynamic and flexible advantages of the VQA model. Rather than relying on the final language output, our approach uses multiple-choice questions to extract prompt-specific latent representations, which are enriched with relevant visual information. These representations are combined to create a final overall image embedding, which is decoded via reference to latent class prototypes constructed from the few labeled examples. We demonstrate this method outperforms both pure visual encoders and zero-shot VQA baselines to achieve impressive performance on common few-shot tasks including MiniImageNet, Caltech-UCSD Birds, and CIFAR-100. Finally, we show our approach does particularly well in settings with numerous diverse visual attributes such as the fabric, article-style, texture, and view of different articles of clothing, where other few-shot approaches struggle, as we can tailor our image representations only on the semantic features of interest. Code will be made publicly available.
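The prototype-decoding step described above can be sketched as follows (the embeddings are toy 2-D vectors and the cosine-similarity decoding is an assumed simplification; the paper's VQA-derived representations are far richer): class prototypes are the mean embedding of each class's few labeled examples, and a query is assigned to its nearest prototype.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """One prototype per class: the mean embedding of that class's examples."""
    classes = sorted(set(labels))
    protos = np.stack([
        np.mean([e for e, l in zip(embeddings, labels) if l == c], axis=0)
        for c in classes])
    return classes, protos

def classify(query, classes, protos):
    """Assign the query to the class whose prototype is most cosine-similar."""
    sims = protos @ query / (np.linalg.norm(protos, axis=1) * np.linalg.norm(query))
    return classes[int(np.argmax(sims))]

# Four labeled support embeddings (toy values), two classes.
emb = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),   # "cat"-like
       np.array([0.0, 1.0]), np.array([0.1, 0.9])]   # "dog"-like
classes, protos = build_prototypes(emb, ["cat", "cat", "dog", "dog"])
print(classify(np.array([0.2, 0.8]), classes, protos))   # dog
```

The training-free character of the method comes from this step: only prototype construction uses the labeled examples, and no model weights are updated.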

KEYWORDS

Visual Question Answering, Few-Shot Classification, Prompt Engineering.


Frequency Spectrum of the Hydraulic Pressures Around the Rock Blocks in a Parallel Flow Dam Spillway

Vineeth Reddy Karnati, Ali Saeidi, Département des sciences appliquées, Université du Québec à Chicoutimi, Chicoutimi, Québec, Canada

ABSTRACT

Hydraulic rock mass erosion in dam spillways refers to the gradual ejection of intact rock mass blocks with the flow of water due to its erosive forces. This erosion can lead to issues related to the structural integrity of hydropower dams, besides reducing their operational efficiency. The erosion process depends on the hydraulic pressures around the blocks, especially on the top and bottom of the block. The pressures within the joints of the rock mass are influenced by the natural frequency of the joint, and resonance conditions favour the erosion process, which necessitates the study of the frequency spectrum of the hydraulic pressures around the block. The frequency spectrum of these pressures is studied by carrying out several pilot plant dam spillway model tests and collecting the hydraulic pressure data. The frequency spectrum revealed the possibility of the occurrence of resonance conditions within the fracture network.
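The spectral step can be sketched with a synthetic pressure trace (the sampling rate, record length, and signal components below are hypothetical, not the pilot-plant data): the trace is transformed with the FFT and the dominant frequency is read off the magnitude spectrum; a peak near a joint's natural frequency would flag resonance risk.

```python
import numpy as np

fs = 200.0                         # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)    # 2-second pressure record
# Synthetic trace: a strong 12 Hz fluctuation plus a weaker 40 Hz component.
pressure = 3.0 * np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.sin(2 * np.pi * 40.0 * t)

spectrum = np.abs(np.fft.rfft(pressure))          # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(pressure), d=1.0 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
print(dominant)    # 12.0
```

Comparing such dominant frequencies against the estimated natural frequencies of the rock joints is what allows resonance conditions in the fracture network to be identified.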

KEYWORDS

Hydraulic Rock Mass Erosion, Fast Fourier Transform, Frequency Spectrum, Dam Spillway & Pilot Plant Spillway Model Tests.


Fuzzy Logic and Neural Networks for Disease Detection and Simulation in Matlab

Elvir Čajić1, Irma Ibrišimović2, Alma Šehanović3 , Damir Bajrić4, Julija Ščekić5, 1Elementary school „Prokosovići“ Prokosovići, Lukavac 75300, Bosnia and Herzegovina, 2Faculty of Science, University of Tuzla, Tuzla 75000, Bosnia and Herzegovina, 3,4High school „Meša Selimović", Tuzla 75000, Bosnia and Herzegovina, 5Faculty of Agriculture, University of Belgrade, Belgrade 11080, Serbia

ABSTRACT

This paper investigates the integration of fuzzy logic and neural networks for disease detection using the Matlab environment. Disease detection is key in medical diagnostics, and the combination of fuzzy logic and neural networks offers an advanced methodology for the analysis and interpretation of medical data. Fuzzy logic is used for modeling and resolving uncertainty in diagnostic processes, while neural networks are applied for in-depth processing and analysis of images relevant to disease diagnosis. This paper demonstrates the development and implementation of a simulation system in Matlab, using real medical data and images of organs for the purpose of detecting specific diseases, with a special focus on the application in the diagnosis of kidney diseases. Combining fuzzy logic and neural networks, the simulation offers precision and robustness in the diagnosis process, opening the door to advanced medical information systems.

KEYWORDS

Fuzzy logic, Neural networks, Disease detection, Matlab simulation, Medical images, Diagnostics, Uncertainty, Modeling.


Pattern-based Refinement for Event-b Machines

Elie Fares, Higher Colleges of Technology; Jean-Paul Bodeveix, IRIT UPS, Université de Toulouse; and Mamoun Filali, IRIT CNRS, Université de Toulouse

ABSTRACT

Today, several high-level requirements languages are present on the market. Most of them have demonstrated great robustness in capturing, modeling, and verifying industrial requirements. However, developing these systems is not a cakewalk, especially in the context of big industrial projects [5]. In this paper, we focus on the preliminary steps of the development of safety-critical systems. We investigate how patterns could be used to generate refinements automatically in the context of an Event-B development. The patterns proposed in this paper either impose constraints on the model through weakest precondition calculus, superpose counters, or introduce de-synchronization mechanisms using observers. Moreover, we revisit a classic case study using our proposed patterns.

KEYWORDS

High-level requirements, Refinements, Event-B, Pattern-based approach, Weakest precondition calculus.


Detecting SYN Flood Attacks Using CSA-nets

Mohammed Alahmadi, Department of Software Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah 21493, Saudi Arabia

ABSTRACT

Distributed Denial of Service (DDoS) attacks pose a persistent threat to network security by interrupting server functions. One common DDoS attack is the SYN-flood attack, which targets the three-way handshake process of the TCP protocol. This technique overwhelms a system by sending a vast number of SYN messages, thereby exhausting its computational and communicative resources. A visual simulation of this scenario offers deeper insights into the intricacies of the TCP SYN-flood attack. This paper presents a novel approach that combines TCP protocol anomaly detection with visual analysis through Communication Structured Acyclic nets (CSA-nets). The strategy provides a clear visualisation of attack behaviours, granting a deeper understanding of DDoS patterns and their underlying causes. A new concept of TCCSA-nets is introduced. TCCSA-nets make it possible to elaborate on the system’s performance and to emphasise the system’s operations in real time. This feature allows a message whose time exceeds a predefined allowed time to be classified as abnormal; otherwise, messages are treated as normal communication. Through tests on public datasets, the results show that the proposed approach is effective in detecting SYN-flood attacks.
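The timing rule described above reduces to a simple threshold check (the threshold value and message fields below are hypothetical; the paper derives its classification from timed CSA-net semantics rather than a bare function): a handshake message whose elapsed time exceeds the allowed window is flagged abnormal.

```python
# Allowed handshake completion window in seconds (illustrative value).
ALLOWED_SECONDS = 3.0

def classify_message(sent_at, acked_at, allowed=ALLOWED_SECONDS):
    """Flag a handshake message as abnormal when its elapsed
    time exceeds the predefined allowed window."""
    elapsed = acked_at - sent_at
    return "abnormal" if elapsed > allowed else "normal"

# A legitimate handshake completes quickly; a half-open SYN-flood
# connection never completes within the window.
print(classify_message(sent_at=0.0, acked_at=0.4))    # normal
print(classify_message(sent_at=0.0, acked_at=9.7))    # abnormal
```

In the TCCSA-net setting, this check is attached to the net's timed occurrences, so the visualisation highlights exactly which handshake steps stall under attack.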

KEYWORDS

Formal model, modelling, visualising, analysing, cybersecurity, protocols, threshold detection.


Spam Detection on Social Media Networks Based on Contextual Word Embedding and Recurrent Neural Network With Non-contextual Embedders

Sawsan Alshattnawi and Hebah Eid Almomani, Computer Science Department, Faculty of Computer & Information Technology, Yarmouk University, Jordan

ABSTRACT

Spam identification is one of the main cyber-security concerns: as social media networks have grown in recent years, spamming has become more prevalent and harmful. Many solutions for spam detection problems in social media networks have been proposed using machine learning and deep learning approaches. However, this field still needs extensive and up-to-date studies to avoid the gaps of the past and find more accurate results for detecting spam messages. This paper proposes the use of word embeddings in spam detection. We used Recurrent Neural Network (RNN) models stacked over non-contextual word embeddings. The RNN models used are Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), stacked with GloVe and Word2vec pre-trained word embeddings. In addition, we used contextualized word embeddings, namely Bidirectional Encoder Representations from Transformers (BERT) and Embeddings from Language Models (ELMo). The results, computed over two datasets, show that contextualized word embeddings can provide promising results without being stacked on other deep learning models. ELMo achieved the best results among contextual embedders, with 90% on Twitter data and 94% on YouTube data.
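The non-contextual embedding step can be sketched as a table lookup (the vectors below are toy values, not GloVe or Word2vec weights, and the `<unk>` fallback is an assumed convention): each token is mapped to a fixed vector, and the resulting sequence is what an LSTM or GRU would consume.

```python
import numpy as np

# Toy pretrained embedding table; real GloVe/Word2vec vectors have 100+ dims.
embedding_table = {
    "free":  np.array([0.9, 0.1]),
    "prize": np.array([0.8, 0.2]),
    "hello": np.array([0.1, 0.9]),
    "<unk>": np.array([0.0, 0.0]),   # fallback for out-of-vocabulary tokens
}

def embed(tokens):
    """Map a tokenized message to its (seq_len, dim) embedding sequence,
    the RNN-ready input for a stacked LSTM/GRU classifier."""
    vecs = [embedding_table.get(t, embedding_table["<unk>"]) for t in tokens]
    return np.stack(vecs)

seq = embed(["free", "prize", "unseen"])
print(seq.shape)        # (3, 2)
```

Contextual embedders such as BERT and ELMo differ precisely here: the vector for a token depends on its surrounding words, which is why they can perform well even without an RNN stacked on top.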

KEYWORDS

Cyber-Security, Spam Detection, RNN, Word Embeddings, BERT, ELMo.


Optimization of Network Performance in Complex Environment With SDN

Munienge Mbodila1 and Omobayo A. Esan2, 1,2Department of Information Technology Systems, Walter Sisulu University, Eastern Cape, South Africa

ABSTRACT

Many organizations’ networks today depend on Internet Protocol (IP) addresses to identify and locate servers. This approach gives satisfactory performance for a static network where each physical device is recognizable by an IP address, but is extremely laborious for large networks. In a complex, large network, utilizing a single controller can be problematic, as it creates a single point of failure. Furthermore, if the number of switches attached to a controller increases, the traffic can overwhelm the controller's performance, consequently hindering scalability in a Software Defined Network (SDN) environment. To address these challenges, multiple controllers placed using a combination of k-center and k-means are presented in this study to minimize propagation latency between the switches and the controllers. The simulations were conducted using Mininet, and the results show that the chosen number and locations of SDN controllers minimize inter-controller latency and improve controller throughput, ensuring the optimization of network performance.
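The k-means half of the placement can be sketched as follows (switch coordinates, k, and the plain Lloyd iteration are illustrative, not the paper's combined k-center/k-means procedure): switches are points in a latency plane, and each cluster centroid is a candidate controller location that minimises average switch-to-controller distance.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: alternate nearest-centroid assignment
    and centroid recomputation."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign every switch to its nearest candidate controller
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        # move each controller to the centre of the switches it serves
        centroids = np.stack([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
    return centroids, labels

# Two geographically separated switch sites (coordinates are made up).
switches = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                     [9.0, 9.0], [10.0, 9.0], [9.0, 10.0]])
centroids, labels = kmeans(switches, k=2)
print(centroids)   # one controller placed near the centre of each site
```

The k-center objective would instead minimise the worst-case switch-to-controller distance; combining both, as the study does, trades off average latency against the worst-served switch.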

KEYWORDS

Software-defined networking, controller, latency, throughput.


Reach Us

seapp@csea2023.org
jseconf@yahoo.com