
Submissions for IRCEP’s Experimentation and Validation Challenges are open until December 2025. The guidelines and rules that participants must adhere to can be accessed here.

*Instructions for each challenge can be found below as it opens for submission.
Open Challenges
CODECO Data Generator #1
UPRC invites participants to take part in the CODECO Experimentation and Validation Challenge focused on enhancing synthetic data generation for CRD (Common Resource Description) models. Participants will work with version 1 of the Synthetic Data Generator (SDG v1) in combination with the main version of the Data Generator. The objective is to develop an improved version that integrates the synthetic data generation capabilities of SDG v1 with the real CODECO CRDs provided by the main Data Generator, producing synthetic data that accurately mirrors the structure and content of real CODECO CRDs. Participants are expected to submit a comprehensive set of results, including validation metrics and supporting evidence demonstrating the fidelity and effectiveness of the enhanced synthetic data generation process.
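The choice of validation metrics is left to participants. As one minimal sketch of a structural-fidelity metric, the Jaccard similarity between the field paths of a real CRD and a synthetic one could be computed; all function names and data below are hypothetical illustrations, not part of SDG v1 or the Data Generator.

```python
# Illustrative sketch only: compare the field structure of a real CRD
# against a synthetic one using Jaccard similarity of dotted field paths.

def field_paths(obj, prefix=""):
    """Recursively collect dotted field paths from a nested dict."""
    paths = set()
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            paths.add(path)
            paths |= field_paths(value, path)
    return paths

def structural_fidelity(real_crd, synthetic_crd):
    """Jaccard similarity of the two documents' field-path sets, in [0, 1]."""
    real, synth = field_paths(real_crd), field_paths(synthetic_crd)
    if not real and not synth:
        return 1.0
    return len(real & synth) / len(real | synth)

# Hypothetical example documents: same structure, different values.
real = {"spec": {"cpu": "500m", "memory": "1Gi"}, "metadata": {"name": "app"}}
synthetic = {"spec": {"cpu": "250m", "memory": "2Gi"}, "metadata": {"name": "gen"}}
print(structural_fidelity(real, synthetic))  # identical structure -> 1.0
```

A score of 1.0 means the synthetic document reproduces every field path of the real one; content-level fidelity (value distributions) would need additional metrics.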
CODECO Data Generator #2
In this challenge, participants are invited by UPRC to explore, install, and rigorously evaluate the CODECO Data Generator alongside other comparable workload data generation tools. The objective is to perform a detailed, hands-on comparison that highlights the strengths, weaknesses, and limitations of each tool. Through this process, participants will contribute valuable insights into the performance and capabilities of CODECO within real-world or simulated environments.
CODECO Secure Connectivity
As part of the CODECO project’s commitment to advancing secure and efficient connectivity in next-generation communication systems, Universidad Carlos III de Madrid (UC3M) and Telefónica (TID) are leading a focused Validation and Experimentation Challenge. This effort aims to test and refine key components of the CODECO architecture through practical, scenario-based evaluation.
The challenge is structured around two main focal actions: the performance evaluation of the Secure Connectivity Component and the implementation of a Passive Monitoring Feature. These activities are designed to provide actionable insights into the robustness, effectiveness, and integration readiness of CODECO technologies.
CODECO Deployment and Scalability
This CODECO Experimentation and Validation Challenge, led by ICOM, is designed to test the deployment times and scalability of CODECO in real-world scenarios. The goal is to evaluate CODECO’s efficiency and performance across various use cases, focusing on deployment speed and its ability to scale effectively. This challenge offers participants an opportunity to validate CODECO’s capabilities while contributing valuable insights to optimize its deployment framework.
CODECO Energy Awareness Strategies
This challenge focuses on evaluating the effectiveness of proposed energy-aware scheduling strategies within the CODECO framework. Participants are tasked with comparing their solutions against two key baselines: i) the standard Kubernetes (K8s) scheduler and ii) KEIDS or similar energy-aware scheduling approaches. The challenge, provided by FORTISS, will specifically assess the performance of these strategies across several SMART goals.
CODECO Resilience Strategies Evaluation
This challenge aims to evaluate the resilience strategies integrated into the CODECO framework. Participants will assess the effectiveness of their proposed solutions by comparing them against two key baselines: i) CODECO without resilience and ii) the vanilla Kubernetes (K8s) scheduler. The challenge, proposed by FORTISS, focuses on measuring resilience across several SMART goals.
Evaluate the CODECO SWM Scheduler
The CODECO Experimentation and Validation Challenge, provided by FORTISS, is designed to assess the performance of CODECO’s graph-based scheduling approach, specifically Seamless Workload Migration (SWM), by comparing it against two key baselines: the vanilla Kubernetes (K8s) scheduler and the Kubernetes network-aware scheduler.
Scalability Testing of Fine-Tuning for MARL Agents
The CODECO Experimentation and Validation Challenge on scalability testing of fine-tuning for MARL agents is an initiative designed to evaluate and enhance the performance of Multi-Agent Reinforcement Learning (MARL) systems across complex neighbourhood environments. In this challenge, provided by I2CAT, participants are tasked with developing and submitting solutions that trigger re-training or fine-tuning based on a custom performance metric.
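The custom performance metric and trigger logic are left to participants. As a minimal sketch under assumed names (none of which come from the CODECO codebase), a trigger could watch a moving average of the metric and flag fine-tuning when it degrades past a threshold:

```python
# Hypothetical sketch: all class names, windows, and thresholds are
# illustrative choices, not part of CODECO or any MARL library.
from collections import deque

class RetrainTrigger:
    def __init__(self, window=10, threshold=0.8):
        self.window = deque(maxlen=window)   # recent metric samples
        self.threshold = threshold           # minimum acceptable average

    def update(self, metric_value):
        """Record a new metric sample; return True if re-training is needed."""
        self.window.append(metric_value)
        if len(self.window) < self.window.maxlen:
            return False  # not enough samples to judge yet
        return sum(self.window) / len(self.window) < self.threshold

trigger = RetrainTrigger(window=3, threshold=0.8)
for value in [0.9, 0.85, 0.9, 0.5]:
    if trigger.update(value):
        print(f"fine-tuning triggered at metric {value}")
```

A moving average is only one possible signal; participants might equally trigger on metric variance, per-agent divergence, or drift between training and deployment distributions.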
Scalability of MARL model over increasing number of clusters
This CODECO Experimentation and Validation Challenge, provided by I2CAT, invites participants to evaluate the scalability and performance of Multi-Agent Reinforcement Learning (MARL) systems in increasingly complex neighbourhood environments. Specifically, the focus is on testing MARL performance as the number of clusters (and, consequently, the number of communicating and collaborating agents) grows. The objective is to identify the scalability limits of the MARL model, assessing at what point increased coordination begins to create bottlenecks within the CODECO system.
[CODEF and Benchmarking] Dynamic Stress Testing
In this challenge, coordinated by ATH and UPRC, we aim to rigorously evaluate the resilience and adaptability of CODECO-managed deployments under adverse conditions. Participants will introduce artificial failures and resource bottlenecks using stress testing and chaos engineering tools, such as killing pods, saturating CPUs, or throttling network bandwidth. The objective is to assess how effectively CODECO responds to these disruptions, reconfigures resources, and maintains optimal service levels under pressure.
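The core measurement in such an experiment is the time from fault injection to recovery. As an illustrative sketch (not a CODECO component), a chaos-experiment loop could inject a fault, poll a health probe, and record the recovery time; the probe and injector below are simulated stand-ins for real actions such as deleting a pod or saturating a CPU:

```python
# Illustrative only: a minimal chaos-experiment runner. In a real run,
# inject_fault / is_healthy would wrap cluster actions and health checks;
# here they are simulated so the sketch is self-contained.
import time

def run_chaos_experiment(inject_fault, is_healthy, timeout=30.0, poll=0.01):
    """Inject a fault and return seconds until the system reports healthy."""
    inject_fault()
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if is_healthy():
            return time.monotonic() - start
        time.sleep(poll)
    raise TimeoutError("system did not recover within the timeout")

# Simulated target: "recovers" 0.05 s after the fault is injected.
state = {"failed_at": None}
def inject_fault():
    state["failed_at"] = time.monotonic()
def is_healthy():
    return time.monotonic() - state["failed_at"] > 0.05

recovery = run_chaos_experiment(inject_fault, is_healthy)
print(f"recovered in {recovery:.3f} s")
```

Repeating such runs across fault types (pod kills, CPU saturation, bandwidth throttling) yields a distribution of recovery times rather than a single number, which is usually what a resilience comparison needs.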
[CODEF and Benchmarking] Testing, Debugging and AI-powered Assistant
In this challenge, spearheaded by ATH and UPRC, participants will explore how the integration of AI/ML can enhance the CODEF framework’s ability to perform testing, live debugging, and self-healing in real-time. The goal is to leverage intelligent automation to improve CODEF’s robustness, enabling faster detection and resolution of issues. By optimizing these capabilities, participants will work towards reducing the Mean Time to Repair (MTTR), ultimately increasing the overall reliability and efficiency of the system. This challenge invites participants to experiment with innovative AI/ML approaches that can transform how CODECO handles system failures and operational anomalies.
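MTTR, the headline metric here, is simply total repair time divided by the number of incidents. A small sketch of that bookkeeping (with hypothetical incident data) shows what participants would be driving down:

```python
# MTTR = total repair time / number of incidents.
# The incident timestamps below are hypothetical example data.

def mean_time_to_repair(incidents):
    """incidents: list of (detected_at, resolved_at) timestamps in seconds."""
    if not incidents:
        return 0.0
    total_repair = sum(resolved - detected for detected, resolved in incidents)
    return total_repair / len(incidents)

incidents = [(0, 120), (500, 560), (900, 1020)]  # three incidents, in seconds
print(mean_time_to_repair(incidents))  # (120 + 60 + 120) / 3 = 100.0
```

Faster detection shrinks the `detected_at` side of each interval and faster resolution shrinks the `resolved_at` side, which is exactly where AI-assisted testing and self-healing are expected to help.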
[CODEF and Benchmarking] Security Mechanisms for CODEF
In this challenge, led by ATH and UPRC, participants will focus on proposing and implementing comprehensive end-to-end security mechanisms for the CODEF framework. The objective is to enhance the security posture of CODEF by designing and deploying robust solutions that protect data, services, and system communications across the entire lifecycle. Participants will evaluate and refine their security strategies to ensure that CODEF is resilient against potential vulnerabilities and attacks, while maintaining the integrity and availability of system operations. This challenge presents a unique opportunity to push the boundaries of security within dynamic, distributed environments.