Search Results

Now showing 1 - 10 of 51
  • Unsupervised Domain Adaptation Using Generative Adversarial Networks for Semantic Segmentation of Aerial Images
    Publication . Benjdira, Bilel; Bazi, Yakoub; Koubaa, Anis; Ouni, Kais
    Segmenting aerial images has great potential in surveillance and scene understanding of urban areas. It provides a means for automatically reporting the different events that happen in inhabited areas, which remarkably promotes public safety and traffic management applications. Since the wide adoption of convolutional neural network methods, the accuracy of semantic segmentation algorithms can easily surpass 80% if a robust dataset is provided. Despite this success, deploying a pretrained segmentation model to survey a new city that is not included in the training set significantly decreases accuracy, due to the domain shift between the source dataset on which the model is trained and the new target domain of the new city's images. In this paper, we address this issue and consider the challenge of domain adaptation in semantic segmentation of aerial images. We designed an algorithm that reduces the impact of domain shift using generative adversarial networks (GANs). In the experiments, we tested the proposed methodology on the International Society for Photogrammetry and Remote Sensing (ISPRS) semantic segmentation dataset and found that our method improves overall accuracy from 35% to 52% when passing from the Potsdam domain (the source domain) to the Vaihingen domain (the target domain). In addition, the method efficiently recovers classes inverted by sensor variation, improving their average segmentation accuracy from 14% to 61%.
  • A Cloud Based Disaster Management System
    Publication . Cheikhrouhou, Omar; Koubaa, Anis; Zarrad, Anis
    The combination of wireless sensor networks (WSNs) and 3D virtual environments opens a new paradigm for their use in natural disaster management applications. It is important to have a realistic virtual environment based on datasets received from WSNs to prepare a backup rescue scenario with an acceptable response time. This paper describes a complete cloud-based system that collects data from wireless sensor nodes deployed in real environments and then builds a 3D environment in near real-time to reflect the incident detected by the sensors (fire, gas leak, etc.). The system is intended as a training environment in which a rescue team can develop various rescue plans before applying them in real emergency situations. The proposed cloud architecture combines 3D data streaming and sensor data collection to build an efficient network infrastructure that meets the strict network latency requirements of 3D mobile disaster applications. Compared to other existing systems, the proposed system is truly complete. First, it collects data from sensor nodes and then transfers it using an enhanced Routing Protocol for Low-Power and Lossy Networks (RPL). A 3D modular visualizer with a dynamic game engine was also developed in the cloud for near real-time 3D rendering, an advantage for highly complex rendering algorithms and less powerful devices. An Extensible Markup Language (XML) atomic-action concept is used to inject 3D scene modifications into the game engine without stopping or restarting the engine. Finally, a multi-objective multiple traveling salesman problem (AHP-MTSP) algorithm is proposed to generate an efficient rescue plan by assigning robots and multiple unmanned aerial vehicles to disaster target locations, while minimizing a set of predefined objectives that depend on the situation.
The results demonstrate that immediate feedback obtained from the reconstructed 3D environment can help to investigate what–if scenarios, allowing for the preparation of effective rescue plans with an appropriate management effort.
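The multi-robot target-assignment idea behind the AHP-MTSP step can be illustrated with a simple greedy heuristic. This is a minimal sketch only, assuming straight-line travel distance as the sole objective; the paper's actual algorithm is multi-objective and the function and robot names here are invented for illustration.

```python
import math

def assign_targets(robots, targets):
    """Greedy sketch: repeatedly send the closest available robot to the
    nearest unassigned target, then advance that robot's position."""
    positions = dict(robots)                 # robot name -> current (x, y)
    tours = {name: [] for name in positions}
    remaining = list(targets)
    while remaining:
        # pick the (robot, target) pair with the smallest travel distance
        name, tgt = min(
            ((n, t) for n in positions for t in remaining),
            key=lambda p: math.dist(positions[p[0]], p[1]),
        )
        tours[name].append(tgt)
        positions[name] = tgt                # robot moves to the target
        remaining.remove(tgt)
    return tours

tours = assign_targets(
    robots=[("uav1", (0.0, 0.0)), ("ugv1", (10.0, 0.0))],
    targets=[(1.0, 1.0), (9.0, 1.0), (2.0, 2.0)],
)
```

A real MTSP solver would also balance the other objectives (energy, mission time) rather than minimizing distance alone.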
  • HTTU-Net: Hybrid Two Track U-Net for Automatic Brain Tumor Segmentation
    Publication . M. Aboelenein, Nagwa; Songhao, Piao; Koubaa, Anis; Noor, Alam; Afifi, Ahmed
    Brain cancer is one of the most dominant causes of cancer death; the best way to diagnose and treat brain tumors is to screen early. Magnetic Resonance Imaging (MRI) is commonly used for brain tumor diagnosis; however, achieving high accuracy and performance remains a challenge, and is a vital problem in most previously presented automated medical diagnosis systems. In this paper, we propose a Hybrid Two-Track U-Net (HTTU-Net) architecture for brain tumor segmentation. This architecture leverages Leaky ReLU activation and batch normalization. It includes two tracks, each with a different number of layers and a different kernel size; we then merge the two tracks to generate the final segmentation. We use the focal loss and generalized Dice loss (GDL) functions to address the problem of class imbalance. The proposed segmentation method was evaluated on the BraTS'2018 datasets and obtained a mean Dice similarity coefficient of 0.865 for the whole tumor region, 0.808 for the core region, and 0.745 for the enhancing region, and a median Dice similarity coefficient of 0.883, 0.895, and 0.815 for the whole tumor, core, and enhancing regions, respectively. The proposed HTTU-Net architecture is sufficient for the segmentation of brain tumors and achieves highly accurate results. Other quantitative and qualitative evaluations are discussed in the paper. They confirm that our results are very comparable to expert human-level performance and could help experts reduce diagnosis time.
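The Dice similarity coefficient used as the evaluation metric above can be computed as follows. This is a minimal sketch for binary masks given as flat 0/1 lists; the smoothing constant is a common convention, not a value from the paper, and the paper's training loss additionally combines focal and generalized Dice terms.

```python
def dice_coefficient(pred, truth, smooth=1e-6):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|), smoothed to avoid 0/0."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * intersection + smooth) / (sum(pred) + sum(truth) + smooth)

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) ≈ 0.667
```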
  • Dynamic Multi-Objective Auction-Based (DYMO-Auction) Task Allocation
    Publication . Baroudi, Uthman; Alshaboti, Mohammad; Koubaa, Anis; Trigui, Sahar
    In this paper, we address the problem of online dynamic multi-robot task allocation (MRTA). In the existing literature, several works have investigated this problem as a multi-objective optimization (MOO) problem and proposed different approaches to solve it, including heuristic methods. Existing works attempted to find Pareto-optimal solutions to the MOO problem. However, to the best of the authors' knowledge, none of them used task quality as an objective to optimize. In this paper, we address this gap and propose a new distributed multi-objective task allocation method (DYMO-Auction) that considers tasks' quality requirements along with travel distance and load balancing. A robot is capable of performing the same task with different levels of perfection, and a task needs to be performed with a given level of perfection; we call this the quality level. We designed a new utility function that considers four competing metrics, namely cost, energy, distance, and task type. The method assigns tasks dynamically as they emerge, without global information, and selects the auctioneer randomly for each new task to avoid a single point of failure. Extensive simulation experiments using the 3D Webots simulator were conducted to evaluate the performance of the proposed DYMO-Auction. DYMO-Auction is compared with the sequential single-item (SSI) approach, which requires global information and offline calculations, and with the Fuzzy Logic Multiple Traveling Salesman Problem (FL-MTSP) approach. The results demonstrate a proper match with SSI in terms of quality satisfaction and load balancing, although DYMO-Auction demands 20% more travel distance. We also experimented with DYMO-Auction using real Turtlebot2 robots. The results of the simulation and prototype experiments follow the same trend, demonstrating the usefulness and practicality of the proposed method in real-world scenarios.
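The auction mechanism described above can be sketched in a few lines: each robot bids a utility for a newly announced task, and an auctioneer awards the task to the best bidder. The weights, field names, and utility terms below are illustrative assumptions, not the paper's actual utility function.

```python
import math

def bid(robot, task, w_dist=1.0, w_load=0.5, w_quality=2.0):
    """Illustrative utility (lower is better): penalizes travel distance,
    current load, and any shortfall between the robot's achievable quality
    level and the task's required quality level."""
    shortfall = max(0, task["quality"] - robot["quality"])
    return (w_dist * math.dist(robot["pos"], task["pos"])
            + w_load * robot["load"]
            + w_quality * shortfall)

def auction(robots, task):
    """The auctioneer awards the task to the lowest-cost bidder and
    updates that robot's load."""
    winner = min(robots, key=lambda r: bid(r, task))
    winner["load"] += 1
    return winner["name"]

robots = [
    {"name": "r1", "pos": (0, 0), "load": 0, "quality": 3},
    {"name": "r2", "pos": (5, 0), "load": 2, "quality": 1},
]
task = {"pos": (4, 0), "quality": 1}
winner = auction(robots, task)
```

In the distributed setting described in the abstract, the auctioneer role would rotate randomly among robots for each new task rather than being a fixed node.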
  • DeepBrain: Experimental Evaluation of Cloud-Based Computation Offloading and Edge Computing in the Internet-of-Drones for Deep Learning Applications
    Publication . Koubaa, Anis; Ammar, Adel; Alahda, Mahmoud; Kanhouc, Anas; Azar, Ahmad Taher
    Unmanned Aerial Vehicles (UAVs) have been very effective in collecting aerial image data for various Internet-of-Things (IoT)/smart-city applications such as search and rescue, surveillance, vehicle detection and counting, and intelligent transportation systems, to name a few. However, real-time processing of the collected data at the edge in the context of the Internet-of-Drones remains an open challenge because UAVs have limited energy capabilities, while computer vision techniques consume excessive energy and require abundant resources. This is even more critical when deep learning algorithms, such as convolutional neural networks (CNNs), are used for classification and detection. In this paper, we first propose a system architecture of computation offloading for Internet-connected drones. Then, we conduct a comprehensive experimental study to evaluate the performance, in terms of energy, bandwidth, and delay, of the cloud computation offloading approach versus the edge computing approach for deep learning applications in the context of UAVs. In particular, we experimentally investigate the tradeoff between the communication cost and the computation cost of the two candidate approaches. The main results demonstrate that the computation offloading approach provides much higher throughput (i.e., frames per second) than the edge computing approach, despite the larger communication delays.
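The throughput tradeoff described above can be captured by a simple pipeline model: for a sustained stream, frames per second is limited by the slower of frame transfer (cloud only) and inference. This is a back-of-the-envelope sketch; all the numbers below are illustrative assumptions, not measurements from the paper.

```python
def throughput_fps(frame_bits, uplink_bps, infer_s):
    """Frames per second for a pipelined stream: the bottleneck is the
    slower of transfer time (zero for on-board edge processing) and
    inference time per frame."""
    transfer_s = frame_bits / uplink_bps if uplink_bps else 0.0
    return 1.0 / max(transfer_s, infer_s)

frame = 0.5e6 * 8  # a hypothetical 0.5 MB compressed frame, in bits
edge  = throughput_fps(frame, 0,    infer_s=0.50)  # slow on-board accelerator
cloud = throughput_fps(frame, 20e6, infer_s=0.05)  # fast server GPU, 20 Mb/s uplink
```

Under these assumed numbers the cloud pipeline sustains more frames per second even though each individual frame incurs extra network delay, which matches the qualitative conclusion of the abstract.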
  • A Drone Secure Handover Architecture validated in a Software in the Loop Environment
    Publication . Vasconcelos Filho, Ênio; Gomes, Filipe; Monteiro, Stéphane; Penna, Sergio; Koubaa, Anis; Tovar, Eduardo; Severino, Ricardo
    The flight and control capabilities of uncrewed aerial vehicles (UAVs) have increased significantly with recent research for civilian and commercial applications. As a result, these devices are becoming capable of flying ever greater distances, accomplishing flights beyond visual line of sight (BVLOS). However, given the need for safety guarantees, these flights are increasingly subject to regulations. Handover operations between controllers and the security of the exchanged data are challenges for deploying these devices in various applications. This paper presents a secure handover architecture between control stations, using a Software in the Loop (SIL) model to validate the adopted strategies and reduce the gap between simulation and real-system implementations. The architecture is developed as two separate modules that perform the security and handover processes. Finally, we validate the proposed architecture with several drone flights on a virtual testbed.
  • APEnergy: Application Profile-Based Energy-Efficient Framework for SaaS Clouds
    Publication . Qureshi, Basit; Koubaa, Anis
    In the past decade, there has been a steady increase in the focus on green initiatives for data centers. Various energy-efficiency measures have been proposed and adopted; however, the optimal tradeoff between performance and energy efficiency of data centers is yet to be achieved. Addressing this issue, we present APEnergy, an application profile-based energy-efficient framework for small- to medium-scale data centers. The proposed framework leverages information on applications completed with certain workloads in the data center to build profiles for workflows. The framework uses a novel scheduler to obtain a near-optimal mapping for the placement of workflow tasks in the data center based on three criteria: CPU utilization, power cost, and task completion time. We compare the performance of the proposed scheduler to the similar RTC and HEFT schedulers. Extensive simulation studies are carried out to verify the scalability and efficiency of the APEnergy framework. Results show that the proposed scheduler is 2% and 14% more energy efficient than RTC and HEFT, respectively.
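A multi-criteria placement of the kind described above can be sketched as a weighted score over the three criteria named in the abstract. The normalization, weights, and field names here are invented for illustration; the paper's scheduler is profile-based and more elaborate.

```python
def place_task(hosts, weights=(0.4, 0.3, 0.3)):
    """Illustrative placement: score each host by a weighted sum of its
    normalized CPU utilization, power cost, and estimated completion time,
    and pick the host with the lowest score."""
    def norm(values):
        hi = max(values) or 1.0          # scale each criterion to [0, 1]
        return [v / hi for v in values]
    cpu   = norm([h["cpu"]   for h in hosts])
    power = norm([h["power"] for h in hosts])
    eta   = norm([h["eta"]   for h in hosts])
    w_cpu, w_pwr, w_eta = weights
    scores = [w_cpu * c + w_pwr * p + w_eta * t
              for c, p, t in zip(cpu, power, eta)]
    return hosts[scores.index(min(scores))]["name"]

best = place_task([
    {"name": "h1", "cpu": 0.9, "power": 120, "eta": 30},
    {"name": "h2", "cpu": 0.4, "power": 150, "eta": 45},
])
```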
  • QCOF: New RPL Extension for QoS and Congestion-Aware in Low Power and Lossy Network
    Publication . Ben Aissa, Yousra; Grichi, Hanen; Khalgui, Mohamed; Koubaa, Anis; Bachir, Abdelmalik
    Low power and lossy networks (LLNs) require a routing protocol that meets real-time and energy constraints while supporting congestion awareness and packet priority. The Routing Protocol for Low power and lossy networks (RPL) is therefore recommended by the Internet Engineering Task Force (IETF) for LLN applications. In RPL, nodes select their optimal paths towards their preferred parents based on the routing metrics injected into the objective function (OF). However, RPL does not impose any routing metric and leaves this choice open to the implementation. In this paper, we propose QCOF, a new RPL objective function that is QoS- and congestion-aware. In case paths fail, we define new RPL control messages to enrich the network by adding more routing nodes. Extensive simulations show that QCOF achieves significant improvement over the existing objective functions and appropriately satisfies real-time applications under QoS and network congestion requirements.
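Preferred-parent selection under a congestion-aware objective function can be sketched as ranking candidate parents by a combination of link quality and a congestion signal. The specific metrics (ETX, queue occupancy) and weights below are illustrative assumptions, not the actual QCOF formulation.

```python
def preferred_parent(candidates, w_etx=0.6, w_queue=0.4):
    """Sketch of a congestion-aware objective function: rank candidate
    parents by a weighted combination of link quality (ETX, lower is
    better) and queue occupancy (a congestion signal), picking the
    lowest-ranked candidate as the preferred parent."""
    def rank(c):
        return w_etx * c["etx"] + w_queue * c["queue_occupancy"]
    return min(candidates, key=rank)["id"]

parent = preferred_parent([
    {"id": "A", "etx": 1.2, "queue_occupancy": 0.9},  # good link, congested
    {"id": "B", "etx": 1.6, "queue_occupancy": 0.1},  # worse link, idle
])
```

With these weights the less congested parent B wins despite its worse link, which is the kind of tradeoff a congestion-aware OF is meant to make.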
  • Robot Path Planning and Cooperation
    Publication . Koubaa, Anis; Bennaceur, Hachemi; Chaari, Imen; Trigui, Sahar; Ammar, Adel; Sriti, Mohamed-Foued; Alajlan, Maram; Cheikhrouhou, Omar; Javed, Yasir
    This book presents extensive research on two main problems in robotics: the path planning problem and the multi-robot task allocation problem. It is the first book to provide a comprehensive solution for using these techniques in large-scale environments containing randomly scattered obstacles. The research conducted yielded tangible results both in theory and in practice. For path planning, new algorithms for large-scale problems are devised, implemented, and integrated into the Robot Operating System (ROS). The book also discusses the parallelism advantage of cloud computing techniques for solving the path planning problem. For multi-robot task allocation, it addresses the task assignment problem and the multiple traveling salesman problem for mobile robot applications. In addition, four new algorithms have been devised to investigate cooperation issues, with extensive simulations and comparative performance evaluation. The algorithms are implemented and simulated in MATLAB and Webots.
  • LSAR: Multi-UAV Collaboration for Search and Rescue Missions
    Publication . Alotaibi, Ebtehal Turki; Saleh Alqefari, Shahad; Koubaa, Anis
    In this paper, we consider the use of a team of multiple unmanned aerial vehicles (UAVs) to accomplish a search and rescue (SAR) mission in the minimum time possible while saving the maximum number of people. A novel technique for the SAR problem is proposed and referred to as the layered search and rescue (LSAR) algorithm. The novelty of LSAR involves simulating real disasters to distribute SAR tasks among UAVs. The performance of LSAR is compared, in terms of the percentage of rescued survivors and the rescue and execution times, with the max-sum, auction-based, locust-inspired multi-UAV task allocation (LIAM), and opportunistic task allocation (OTA) schemes. The simulation results show that UAVs running the LSAR algorithm rescue approximately 74% of the survivors on average, which is 8% higher than the next best algorithm (LIAM). Moreover, this percentage increases with the number of UAVs, almost linearly and with the smallest slope, meaning better scalability and coverage are obtained in comparison to the other algorithms. In addition, the empirical cumulative distribution function of the LSAR results shows that the percentages of rescued survivors cluster in the [78%, 100%] range under an exponential curve, meaning most results are above 50%. In comparison, all the other algorithms have almost equal distributions of their rescued-survivor percentages. Furthermore, because the LSAR algorithm focuses on the center of the disaster, it finds more survivors and rescues them faster than the other algorithms, with an average of 55%–77%. Moreover, most registered times to rescue survivors by LSAR are bounded by 04:50:02 with 95% confidence for a one-month mission time.
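The center-outward intuition behind the layered approach can be sketched as follows: order search cells by distance from the disaster center (where survivors are densest) and deal them out to the UAVs so inner layers are covered first. This simplifies the real LSAR layering considerably; the function and cell layout are illustrative only.

```python
import math

def layered_assignment(uavs, cells, center):
    """Sketch of the layered idea: sort search cells from the disaster
    center outward and assign them to UAVs round-robin, so that the
    innermost cells are searched first."""
    ordered = sorted(cells, key=lambda c: math.dist(c, center))
    plan = {u: [] for u in uavs}
    for i, cell in enumerate(ordered):
        plan[uavs[i % len(uavs)]].append(cell)
    return plan

plan = layered_assignment(
    uavs=["u1", "u2"],
    cells=[(0, 3), (0, 1), (0, 2), (0, 4)],
    center=(0, 0),
)
```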