Research

Our mission in the robotics domain.

Coverage, Exploration & Motion Planning

This area studies how single robots and teams of robots explore, cover and move through unknown or partially known environments in a safe and efficient way. It spans coverage path planning, environment decomposition, multi-robot task allocation and exploration under uncertainty, as well as global and local path planning, sampling-based and search-based methods, trajectory optimization and obstacle avoidance in static or dynamic settings. In our work, this typically means planning missions for teams of aerial, ground and underwater robots that must map unknown terrains, scan large agricultural or natural areas, or inspect infrastructure with limited time and battery. Core challenges include guaranteeing sufficient coverage with minimal overlap, respecting kinematic and dynamic constraints, operating with limited communication and energy, handling perception noise, and achieving real-time performance and safe interaction with other agents in cluttered or unstructured spaces.

Coverage mission with the proposed platform. The objective of this mission is to completely cover a non-convex, complex-shaped polygon ROI that includes two separate NFZs (marked in light red), involving 10 UAVs that all contribute equally to the scanning procedure.
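As a rough illustration of the kind of coverage planning described above, the sketch below partitions a rectangular region into equal-width strips and assigns each UAV a back-and-forth (boustrophedon) sweep. All names and parameters (lawnmower_path, split_among_uavs, the region bounds, sensor footprint, and number of UAVs) are illustrative assumptions; real missions over non-convex ROIs with NFZs require far more elaborate decomposition and allocation than this toy example.

```python
# Minimal sketch of boustrophedon (lawnmower) coverage over a rectangle,
# split evenly among several UAVs. All values below are placeholders and
# not taken from any specific mission described on this page.

from typing import List, Tuple

Waypoint = Tuple[float, float]

def lawnmower_path(x_min: float, x_max: float,
                   y_min: float, y_max: float,
                   footprint: float) -> List[Waypoint]:
    """Back-and-forth sweep covering [x_min, x_max] x [y_min, y_max]."""
    path: List[Waypoint] = []
    y = y_min + footprint / 2.0
    left_to_right = True
    while y <= y_max:
        xs = (x_min, x_max) if left_to_right else (x_max, x_min)
        path.append((xs[0], y))
        path.append((xs[1], y))
        left_to_right = not left_to_right
        y += footprint  # adjacent sweeps separated by one sensor footprint
    return path

def split_among_uavs(x_min: float, x_max: float,
                     y_min: float, y_max: float,
                     footprint: float, n_uavs: int) -> List[List[Waypoint]]:
    """Equal-area vertical strips, one lawnmower sweep per UAV."""
    strip = (x_max - x_min) / n_uavs
    return [lawnmower_path(x_min + i * strip, x_min + (i + 1) * strip,
                           y_min, y_max, footprint)
            for i in range(n_uavs)]

if __name__ == "__main__":
    plans = split_among_uavs(0.0, 100.0, 0.0, 60.0, footprint=5.0, n_uavs=4)
    for i, plan in enumerate(plans):
        print(f"UAV {i}: {len(plan)} waypoints, first {plan[0]}, last {plan[-1]}")
```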

Projects

AquaMon: Advanced QUAlity MOnitoring system of water in urbaN areas
TEXTaiLES: TEXTile digitisAtIon tooLs and mEthodS for cultural heritage
iDriving: Intelligent & Digital Roadway Infrastructure for Vehicles Integrated with Next-Gen Technologies
TRACE: Integration and Harmonization of Logistics Operations
PERIVALLON: Protecting the EuRopean terrItory from organised enVironmentAl crime through inteLLigent threat detectiON tools
TREEADS: A Holistic Fire Management Ecosystem for Prevention, Detection and Restoration of Environmental Disasters
ISOLA: Innovative & Integrated Security System on Board Covering the Life Cycle of a Passenger Ships Voyage
NESTOR: aN Enhanced pre-frontier intelligence picture to Safeguard The EurOpean boRders
ARESIBO: Augmented Reality Enriched Situation awareness for Border security
COFLY: Cognitional Operations of micro Flying vehicles
VINO
ROBORDER: autonomous swarm of heterogeneous RObots for BORDER surveillance
RAWFIE: Road-, Air- and Water-based Future Internet Experimentation

Cooperative & Swarm Multi-Robot Systems

This area focuses on how multiple robots coordinate their behavior to achieve tasks more efficiently, robustly or at larger scale than a single robot. It spans multi-robot systems and swarm robotics, where large numbers of relatively simple robots follow decentralized interaction rules. In practice, we often study how heterogeneous teams of drones, ground vehicles and underwater robots can self-distribute, share information and allocate roles to perform missions such as wide-area monitoring, search or gas-plume tracking without relying on a single central controller. Key topics include distributed control, consensus, task and role allocation, formation control, information sharing and swarm-intelligence techniques, with main challenges involving scalability, fault tolerance, predictability of emergent behaviors, and operation under tight sensing and communication constraints.
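As a minimal illustration of decentralized coordination, the sketch below implements a standard average-consensus update in which each robot repeatedly nudges its local estimate toward those of its neighbors, so the team converges to a common value without a central controller. The ring topology, step size, and initial readings are illustrative assumptions rather than details of any project listed here.

```python
# Minimal sketch of decentralized average consensus on a ring of four robots.
# Each robot only uses its neighbors' values; the graph and step size below
# are illustrative assumptions.

from typing import Dict, List

def consensus_step(values: Dict[int, float],
                   neighbors: Dict[int, List[int]],
                   epsilon: float = 0.2) -> Dict[int, float]:
    """One synchronous update: x_i += eps * sum_j (x_j - x_i) over neighbors j."""
    return {
        i: x + epsilon * sum(values[j] - x for j in neighbors[i])
        for i, x in values.items()
    }

if __name__ == "__main__":
    # Four robots on a ring graph, each starting from a different local reading.
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    values = {0: 10.0, 1: 4.0, 2: 7.0, 3: 1.0}
    for _ in range(50):
        values = consensus_step(values, neighbors)
    print({i: round(v, 3) for i, v in values.items()})  # all close to the mean 5.5
```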


Projects

VICTORIOUS: Innovative Ai-Enhanced, Remotely Powered, Indirect Fire Observation System Utilizing Unmanned Vehicles Program
REACTION
CALLISTO: Copernicus Artificial Intelligence Services and data fusion with other distributed data sources and processing at the edge to support DIAS and HPC infrastructures
CREST: Fighting Crime and TerroRism with an IoT-enabled Autonomous Platform based on an Ecosystem of Advanced IntelligEnce, Operations, and InveStigation Technologies

Reinforcement Learning & Learning-Based Control

This area uses reinforcement learning and related learning-based control methods to automatically synthesize robot policies from interaction data instead of hand-designed controllers. Research includes both single-agent and multi-agent settings, where autonomous systems learn to make sequential decisions directly from experience. In our work, this typically means enabling mobile robots to explore unknown terrains, coordinating teams of agents, or managing large networks of devices and vehicles (for example, for charging or resource allocation) in a data-driven way that remains robust enough for real-world deployment. Major challenges include sample efficiency, ensuring safety and stability during learning and deployment, handling partial observability, and making learned policies interpretable and reliable in realistic conditions.

ACRE performance insights on the MountainCarContinuous OpenAI-gym environment. Each of the four snapshots depicts (i) on the left-hand side the belief of the Gaussian mixture model with respect to the state space (inverse novelty) and (ii) on the right-hand side a superimposed visualization of the car's position (the dimmer the picture of the car, the less frequently it is found there) for the corresponding episode. (a) Initial episodes, where the car moves randomly, exploring only a tiny subset of the whole state space. (b) Based on the reward feedback, the car attempts to move closer to the flag; however, the environmental dynamics severely limit its performance. (c) Exploration now pays off, and the car has reached even the farthest states. (d) Based on these experiences, ACRE acquires and retains the optimal policy.
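To make the idea of learning policies directly from interaction concrete, the sketch below runs tabular Q-learning on a toy one-dimensional corridor. The environment (env_step), reward, and hyperparameters are illustrative assumptions; they are not part of ACRE or of any project listed below.

```python
# Minimal sketch of tabular Q-learning on a toy corridor: the agent learns a
# greedy policy purely from experience. Everything here is a placeholder
# example, not code from any project on this page.

import random
from collections import defaultdict

N_STATES = 6          # corridor states 0..5; state 5 is the goal
ACTIONS = (-1, +1)    # move left or move right

def env_step(state, action):
    """Toy corridor dynamics: reward 1.0 only when the goal is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def q_learning(episodes=300, max_steps=200, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)  # (state, action) -> estimated return

    def greedy(state):
        # break ties randomly so the untrained agent still explores
        best = max(q[(state, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if q[(state, a)] == best])

    for _ in range(episodes):
        state = 0
        for _ in range(max_steps):
            action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
            next_state, reward, done = env_step(state, action)
            # one-step temporal-difference (Q-learning) update
            target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
            if done:
                break
    return q

if __name__ == "__main__":
    q = q_learning()
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
    print(policy)  # learned greedy actions should all point toward the goal (+1)
```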

Projects

ASPiDA
Hellenic Autonomous Vehicle (HAV)
NOPTILUS: autoNomous, self-Learning, OPTImal and compLete Underwater Systems
SWeFS: SENSOR WEB FIRE SHIELD
SFLY: Swarm of Micro Flying Robots