
Search Results

Now showing 1 - 10 of 46
  • Multi-robot cooperative stereo for outdoor scenarios
    Publication . Dias, André; Almeida, José; Lima, Pedro; Silva, Eduardo
    In this paper, we propose a cooperative perception framework for multi-robot real-time 3D estimation of highly dynamic targets in outdoor scenarios, based on the monocular camera available on each robot. The relative position and orientation between robots establishes a flexible and dynamic stereo baseline. Overlapping views are subject to geometric constraints that emerge from the stereo formulation, which allows us to obtain a decentralized cooperative perception layer. Epipolar constraints expressed in the global frame are applied both to image feature matching and to optimizing feature search and detection in the image processing of robots with low computational capabilities. In contrast to classic stereo, the proposed framework considers all sources of uncertainty (in the localization, attitude and image detection of both robots) when determining the object's best 3D localization and its uncertainty. The framework can later be integrated into a decentralized data fusion (DDF) multi-target tracking approach, where it can help reduce rumor propagation, data association and track initialization issues. We demonstrate the advantages of this approach in a real outdoor scenario by comparing standalone target tracking with a rigid-baseline stereo rig against the proposed multi-robot cooperative stereo between a micro aerial vehicle (MAV) and an autonomous ground vehicle (AGV). (An illustrative sketch of the two-ray triangulation step follows this result list.)
  • Formation Control Driven by Cooperative Object Tracking
    Publication . Lima, Pedro; Ahmad, Aamir; Dias, André; Conceição, André; Moreira, António; Silva, Eduardo; Almeida, Luís; Oliveira, Luís; Nascimento, Tiago
    In this paper we introduce a formation control loop that maximizes the performance of the cooperative perception of a tracked target by a team of mobile robots, while maintaining the team in formation with a dynamically adjustable geometry that is a function of the quality of the target perception by the team. In the formation control loop, the controller module is a distributed non-linear model predictive controller and the estimator module fuses local estimates of the target state obtained by a particle filter at each robot. The two modules and their integration are described in detail, including a real-time database associated with a wireless communication protocol that facilitates the exchange of state data while reducing collisions among team members. Simulation and real-robot results for indoor and outdoor teams of different robots are presented. The results highlight how our method successfully enables a team of homogeneous robots to minimize the total uncertainty of the cooperative estimate of the tracked target while complying with performance criteria such as keeping a pre-set distance between the teammates and the target and avoiding collisions with teammates and/or surrounding obstacles. (An illustrative sketch of information-form estimate fusion follows this result list.)
  • BoaVista – A Dedicated Artificial Vision Sensor Based on (Re)configurable Hardware
    Publication . Lima, Luís; Almeida, José; Martins, Alfredo; Silva, Eduardo
    This paper addresses the design of a dedicated vision system for autonomous mobile robotics that benefits from the parallel execution capabilities of reconfigurable hardware, processing images from a high-performance CMOS image sensor in a pipeline, concurrently with their acquisition. We present a low-cost system capable of acquiring and processing images at a resolution of 640x480 at 60 fps and of delivering to the central system only the relevant information extracted from the image. This frees the robot's computational resources, which translates into significant power savings and a consequent increase in the robot's energy autonomy. (An illustrative software analogy of this streaming extraction idea follows this result list.)
  • Combining sparse and dense methods in 6D Visual Odometry
    Publication . Silva, Hugo Miguel; Silva, Eduardo; Bernardino, Alexandre
    Visual Odometry is one of the most powerful, yet challenging, means of estimating robot ego-motion. By grounding perception to the static features in the environment, vision is able, in principle, to prevent the estimation bias rather common in other sensory modalities such as inertial measurement units or wheel odometers. We present a novel approach to ego-motion estimation of a mobile robot using a 6D Visual Odometry Probabilistic Approach. Our approach exploits the complementarity of dense optical flow methods and sparse feature-based methods to achieve 6D estimation of vehicle motion: a dense probabilistic method is used to robustly estimate the epipolar geometry between two consecutive stereo pairs; a sparse stereo feature approach estimates feature depth; and an Absolute Orientation method, such as Procrustes, estimates the global scale factor. We tested the proposed method on a known dataset and compared our 6D Visual Odometry Probabilistic Approach, without filtering techniques, against an implementation that uses the well-known 5-point RANSAC algorithm. Moreover, a comparison with an Inertial Measurement Unit (RTK-GPS) is also performed, providing a more detailed evaluation of the method against ground-truth information. (An illustrative sketch of the absolute-orientation step follows this result list.)
  • Master's in autonomous systems: an overview of the robotics curriculum and outcomes at ISEP, Portugal
    Publication . Silva, Eduardo; Almeida, José; Martins, Alfredo; Baptista, João Paulo; Neves, Betina Campos
    Robotics research in Portugal is increasing every year, but few students embrace it as one of their first choices for study. Until recently, job offers for engineers were plentiful, and those looking for a degree in science and technology would avoid areas considered to be demanding, like robotics. At the undergraduate level, robotics programs are still competing for a place in the classical engineering curricula. Innovative and dynamic Master's programs may offer a solution to this gap. The Master's degree in autonomous systems at the Instituto Superior de Engenharia do Porto (ISEP), Porto, Portugal, was designed to provide solid training in robotics and has been showing interesting results, mainly due to differences in course structure and the context in which students are welcomed to study and work.
  • High-Accuracy Low-Cost RTK-GPS for an Unmanned Surface Vehicle
    Publication . Matias, B.; Oliveira, H.; Almeida, José; Dias, André; Ferreira, H.; Martins, Alfredo; Silva, Eduardo
    This work presents a low-cost RTK-GPS system for localization of unmanned surface vehicles. The system is based on standard low-cost L1-band receivers and on the RTKlib open-source software library. Mission scenarios with multiple robotic vehicles are addressed, such as those envisioned in the ICARUS search-and-rescue case, where the possibility of having a moving RTK base on a large USV and multiple smaller vehicles acting as rovers in a local communication network allows for high-quality local relative localization. The approach is validated in operational conditions, with results presented for the moving-base scenario. The system was implemented in the SWIFT USV with the ROAZ autonomous surface vehicle acting as a moving base. This setup allows missions to be performed in a wider range of environments and applications, such as precise 3D environment modeling in contained areas and multiple-robot operations. (An illustrative sketch of the moving-base ENU baseline computation follows this result list.)
  • Groundtruth system for underwater benchmarking
    Publication . Martins, Alfredo; Dias, André; Silva, Hugo Miguel; Almeida, José Miguel; Gonçalves, Pedro; Lopes, Flávio; Faria, André; Ribeiro, João Pedro; Silva, Eduardo
    In this paper a vision-based groundtruth system for underwater applications is presented. The proposed system acts as an external validation, perception and localization mechanism for underwater trials in the INESC TEC / ISEP underwater robotics test tank. It comprises a stereo camera pair with external synchronization and an image-processing and data-recording host computer. The cameras are arranged in a rigid baseline calibrated using scenario key points. Two target detection algorithms were tested and their results are discussed: one is based on template matching techniques, allowing the tracking of arbitrary targets without particular markers, and the other on color segmentation, with the target vehicle equipped with light markers. An example trajectory of a small ROV moving in the tank is also presented. (An illustrative sketch of marker segmentation and stereo triangulation follows this result list.)
  • Real-Time Visual Ground-Truth System for Indoor Robotic Applications
    Publication . Dias, André; Almeida, José Miguel; Martins, Alfredo; Silva, Eduardo
    The robotics community is concerned with the ability to infer and compare the results from researchers in areas such as vision perception and multi-robot cooperative behavior. To accomplish that task, this paper proposes a real-time indoor visual ground-truth system capable of providing accuracy at least one order of magnitude better than the precision of the algorithm to be evaluated. A multi-camera architecture is proposed under the ROS (Robot Operating System) framework to estimate the 3D position of objects, and the implementation and results are contextualized in the RoboCup Middle Size League scenario. (An illustrative sketch of multi-view triangulation follows this result list.)
  • Structured Light System Calibration for Perception in Underwater Tanks
    Publication . Lopes, Flávio; Silva, Hugo; Almeida, José; Silva, Eduardo
    The process of visually exploring underwater environments is still a complex problem. Underwater vision systems require complementary sources of sensor information to help overcome water disturbances. This work proposes the development of calibration methods for a structured-light-based system consisting of a camera and a laser with a line beam. Two different calibration procedures that require only two images from different viewpoints were developed and tested in dry and underwater environments. The results show an accurate calibration of the camera/projector pair, with errors close to 1 mm even in the presence of a small stereo baseline. (An illustrative sketch of laser-plane fitting and ray-plane intersection follows this result list.)
  • Vision-Based Assisted Teleoperation for Inspection Tasks with a Small ROV
    Publication . Costa, Maria J.; Gonçalves, Pedro; Martins, Alfredo; Silva, Eduardo
    It is well known that ROVs require human intervention to guarantee the success of their assignment, as well as the safety of the equipment. However, since their teleoperation is quite complex to perform, there is a need for assisted teleoperation. This study addresses that challenge by developing vision-based assisted teleoperation maneuvers, since a standard camera is present in any ROV. The proposed approach is a visual servoing solution that allows the user to select between several standard image processing methods and is applied to a 3-DOF ROV. The most interesting characteristic of the presented system is the exclusive use of camera data to improve the teleoperation of an underactuated ROV. Through the comparison and evaluation of standard implementations of different vision methods, and the execution of simple maneuvers to acquire experimental results, it is demonstrated that the teleoperation of a small ROV can be drastically improved without the need to install additional sensors. (An illustrative sketch of an image-based servoing command mapping follows this result list.)
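
The sketches below are illustrative only: they reconstruct individual steps mentioned in the abstracts above under stated assumptions and are not the authors' implementations. The first, related to "Multi-robot cooperative stereo for outdoor scenarios", shows a minimal two-ray triangulation of a target seen by two robots: each pixel detection is back-projected through assumed intrinsics `K` and a camera-to-world pose, and the 3D estimate is taken as the midpoint of the shortest segment between the two world-frame rays. The uncertainty propagation and epipolar-guided matching described in the paper are omitted.

```python
import numpy as np

def pixel_to_world_ray(pixel, K, R_wc, t_wc):
    """Back-project a pixel into a unit ray expressed in the world frame.

    K    : 3x3 camera intrinsics (assumed known from calibration)
    R_wc : 3x3 rotation, camera frame -> world frame (robot attitude x mounting)
    t_wc : camera optical centre in the world frame (from robot localization)
    """
    d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    d_world = R_wc @ d_cam
    return np.asarray(t_wc, dtype=float), d_world / np.linalg.norm(d_world)

def triangulate_two_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two (possibly skew) rays."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 when the two rays are nearly parallel
    if abs(denom) < 1e-9:
        raise ValueError("degenerate baseline: rays are (nearly) parallel")
    s = (b * e - c * d) / denom      # parameter along ray 1
    t = (a * e - b * d) / denom      # parameter along ray 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```

In the cooperative setting the baseline p2 - p1 changes at every instant with the MAV/AGV relative pose, which is what makes the triangulation geometry flexible rather than rigid.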
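
For "Formation Control Driven by Cooperative Object Tracking", a minimal sketch of fusing per-robot target estimates in information form, assuming each robot summarizes its particle filter output as a Gaussian (mean, covariance) and that cross-correlations between robots are negligible; the paper's full estimator and its real-time database exchange are not reproduced here.

```python
import numpy as np

def fuse_estimates(estimates):
    """Information-form fusion of independent Gaussian target estimates.

    estimates : list of (mean, covariance) pairs, one per robot, e.g. the
                moments of each robot's local particle filter posterior.
    Returns the fused mean and covariance.
    """
    info_matrix = np.zeros_like(np.asarray(estimates[0][1], dtype=float))
    info_vector = np.zeros(len(estimates[0][0]))
    for mean, cov in estimates:
        cov_inv = np.linalg.inv(cov)       # information matrix of one robot
        info_matrix += cov_inv
        info_vector += cov_inv @ np.asarray(mean, dtype=float)
    fused_cov = np.linalg.inv(info_matrix)
    return fused_cov @ info_vector, fused_cov
```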
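
For the BoaVista entry, a software analogy (not HDL, and not the authors' design) of the streaming idea behind the pipelined hardware: scanlines are consumed as they would arrive from the sensor, only running sums are kept, and only the extracted feature, here a hypothetical blob centroid, is output instead of the full frame.

```python
import numpy as np

def streaming_centroid(rows, threshold):
    """Consume an image one scanline at a time and return only the centroid
    of above-threshold pixels, never storing the whole frame."""
    m00 = m10 = m01 = 0
    for v, row in enumerate(rows):            # rows arrive during acquisition
        hits = np.flatnonzero(np.asarray(row) > threshold)
        m00 += hits.size                      # number of bright pixels
        m10 += int(hits.sum())                # accumulated u coordinates
        m01 += v * hits.size                  # accumulated v coordinates
    if m00 == 0:
        return None
    return m10 / m00, m01 / m00               # (u, v) centroid only
```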
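
For "Combining sparse and dense methods in 6D Visual Odometry", a sketch of the absolute-orientation step: a closed-form Procrustes/Umeyama-style alignment that recovers scale, rotation and translation between two sets of corresponding 3D points (for example, sparse stereo features seen in consecutive frames). Function and variable names are mine, not the paper's.

```python
import numpy as np

def absolute_orientation(src, dst):
    """Similarity transform (scale s, rotation R, translation t) that best
    maps src onto dst, both (N, 3) arrays of corresponding 3D points."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:              # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src     # global scale factor
    t = mu_dst - s * (R @ mu_src)
    return s, R, t
```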
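
For "High-Accuracy Low-Cost RTK-GPS for an Unmanned Surface Vehicle", a sketch of the bookkeeping around a moving-base solution: converting the base (large USV) and rover fixes to ECEF and expressing the rover relative to the base in a local East-North-Up frame. This is generic WGS84 geometry, not RTKlib code, and it assumes the carrier-phase solution itself is already available.

```python
import numpy as np

WGS84_A = 6378137.0               # semi-major axis [m]
WGS84_E2 = 6.69437999014e-3       # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, height_m):
    """WGS84 geodetic coordinates to Earth-Centred Earth-Fixed."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(lat) ** 2)
    return np.array([
        (n + height_m) * np.cos(lat) * np.cos(lon),
        (n + height_m) * np.cos(lat) * np.sin(lon),
        (n * (1.0 - WGS84_E2) + height_m) * np.sin(lat),
    ])

def enu_baseline(base_llh, rover_llh):
    """Rover position relative to the moving base, in the base's local
    East-North-Up frame.  Both arguments are (lat_deg, lon_deg, height_m)."""
    lat, lon = np.radians(base_llh[0]), np.radians(base_llh[1])
    d_ecef = geodetic_to_ecef(*rover_llh) - geodetic_to_ecef(*base_llh)
    ecef_to_enu = np.array([
        [-np.sin(lon),                np.cos(lon),               0.0],
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
    ])
    return ecef_to_enu @ d_ecef
```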
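
For "Groundtruth system for underwater benchmarking", a sketch of the colour-segmentation variant: the light marker is segmented in each synchronised image by an assumed HSV range, its centroid is taken, and the pair of centroids is triangulated with the calibrated rig's projection matrices. The HSV thresholds and projection matrices are assumed inputs; the template matching variant, recording and synchronisation are omitted.

```python
import cv2
import numpy as np

def marker_centroid(image_bgr, hsv_low, hsv_high):
    """Centroid (u, v) of the pixels inside an HSV range, e.g. the light
    marker on the target vehicle; None if nothing is segmented."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)
    m = cv2.moments(mask, True)          # treat the mask as a binary image
    if m["m00"] < 1e-3:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def triangulate_marker(P_left, P_right, uv_left, uv_right):
    """3D marker position from one synchronised stereo pair, given the 3x4
    projection matrices of the calibrated rigid baseline."""
    pts4 = cv2.triangulatePoints(
        P_left, P_right,
        np.asarray(uv_left, dtype=np.float64).reshape(2, 1),
        np.asarray(uv_right, dtype=np.float64).reshape(2, 1))
    return pts4[:3, 0] / pts4[3, 0]      # de-homogenise
```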
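
For "Real-Time Visual Ground-Truth System for Indoor Robotic Applications", a sketch of linear (DLT) triangulation from an arbitrary number of calibrated, time-synchronised cameras, which is one straightforward way a multi-camera node could turn per-camera detections into a 3D position; the paper does not prescribe this exact formulation.

```python
import numpy as np

def triangulate_multiview(projections, pixels):
    """DLT triangulation of a single object from N >= 2 calibrated cameras.

    projections : list of 3x4 projection matrices (one per camera)
    pixels      : list of (u, v) detections of the same object, same order
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])     # two linear constraints per view
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                           # homogeneous least-squares solution
    return X[:3] / X[3]
```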
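
For "Structured Light System Calibration for Perception in Underwater Tanks", a sketch of the two ingredients such a calibration ultimately provides: a least-squares fit of the laser plane to 3D points lying on the projected line, and the ray/plane intersection used afterwards to reconstruct points from a single camera view. The paper's actual two-view calibration procedure is not reproduced.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through (N, 3) points on the laser sheet.
    Returns a unit normal n and offset d such that n . x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of smallest variance
    return normal, -normal @ centroid

def intersect_ray_with_plane(origin, direction, normal, d):
    """Point where a camera ray (origin + t * direction) meets the plane."""
    denom = normal @ direction
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the laser plane")
    t = -(normal @ origin + d) / denom
    return origin + t * direction
```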
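
For "Vision-Based Assisted Teleoperation for Inspection Tasks with a Small ROV", a sketch of one simple image-based servoing law for an underactuated 3-DOF vehicle: the horizontal offset of the tracked feature drives yaw, the vertical offset drives heave, and the apparent size of the feature drives surge. The gains, the reference area and the command normalisation are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def servo_command(feature_uv, image_size, feature_area, reference_area,
                  k_yaw=0.002, k_heave=0.002, k_surge=0.5):
    """Map image-space tracking errors to normalised yaw/heave/surge
    commands in [-1, 1] for a 3-DOF ROV."""
    centre_u, centre_v = image_size[0] / 2.0, image_size[1] / 2.0
    err_u = feature_uv[0] - centre_u                # horizontal offset -> turn
    err_v = feature_uv[1] - centre_v                # vertical offset -> up/down
    err_size = 1.0 - feature_area / reference_area  # too small -> move closer
    yaw = float(np.clip(k_yaw * err_u, -1.0, 1.0))
    heave = float(np.clip(k_heave * err_v, -1.0, 1.0))
    surge = float(np.clip(k_surge * err_size, -1.0, 1.0))
    return yaw, heave, surge
```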