Submissions for IRCEP’s Experimentation and Validation Challenges are open until December 2025. The Guidelines and Rules that participants must adhere to can be accessed here.

APPLY HERE

*Instructions for each challenge can be found below as it opens for submission.


Open Challenges

UPRC invites participants to take part in the CODECO Experimentation and Validation Challenge, focused on enhancing synthetic data generation for CRD (Common Resource Description) models. Participants will work with version 1 of the Synthetic Data Generator (SDG v1) in combination with the main version of the Data Generator. The objective is to develop an improved version that integrates the synthetic data generation capabilities of SDG v1 with the real CODECO CRDs provided by the main Data Generator. The goal is to produce synthetic data that accurately mirrors the structure and content of real CODECO CRDs. Participants are expected to submit a comprehensive set of results, including validation metrics and supporting evidence, demonstrating the fidelity and effectiveness of the enhanced synthetic data generation process.
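As an illustration of the kind of fidelity evidence the challenge asks for, the sketch below compares the key structure of a synthetic CRD against a real one using a simple Jaccard similarity over field paths. The CRD layout and field names (`codeco.eu/v1`, `CodecoApp`, `qosClass`, etc.) are illustrative assumptions, not the actual SDG v1 output format or the CODECO CRD schema.

```python
# Hypothetical sketch: structural-fidelity check between a "real" CODECO CRD
# and a synthetically generated one. The example CRDs below are assumptions
# for illustration only.

def field_paths(obj, prefix=""):
    """Collect dotted key paths from a nested dict (list items share a path)."""
    paths = set()
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            paths.add(path)
            paths |= field_paths(value, path)
    elif isinstance(obj, list):
        for item in obj:
            paths |= field_paths(item, prefix)
    return paths

def structural_fidelity(real_crd, synthetic_crd):
    """Jaccard similarity of key paths: 1.0 means identical structure."""
    real, synth = field_paths(real_crd), field_paths(synthetic_crd)
    return len(real & synth) / len(real | synth) if (real | synth) else 1.0

real = {"apiVersion": "codeco.eu/v1", "kind": "CodecoApp",
        "spec": {"appName": "demo", "qosClass": "gold"}}
synth = {"apiVersion": "codeco.eu/v1", "kind": "CodecoApp",
         "spec": {"appName": "gen-001", "qosClass": "silver"}}

print(structural_fidelity(real, synth))  # 1.0: same structure, different values
```

A submission would report this kind of metric alongside content-level checks (value distributions, enum coverage) over the full set of generated CRDs.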

Instructions to GitLab page for CODECO Data Generator #1

In this challenge, participants are invited by UPRC to explore, install, and rigorously evaluate the CODECO Data Generator alongside other comparable workload data generation tools. The objective is to perform a detailed, hands-on comparison that highlights the strengths, weaknesses, and limitations of each tool. Through this process, participants will contribute valuable insights into the performance and capabilities of CODECO within real-world or simulated environments.

Instructions to GitLab page for CODECO Data Generator #1

 

As part of the CODECO project’s commitment to advancing secure and efficient connectivity in next-generation communication systems, Universidad Carlos III de Madrid (UC3M) and Telefónica (TID) are leading a focused Validation and Experimentation Challenge. This effort aims to test and refine key components of the CODECO architecture through practical, scenario-based evaluation.

The challenge is structured around two main focal actions: the performance evaluation of the Secure Connectivity Component and the implementation of a Passive Monitoring Feature. These activities are designed to provide actionable insights into the robustness, effectiveness, and integration readiness of CODECO technologies.

This CODECO Experimentation and Validation Challenge, led by ICOM, is designed to test the deployment times and scalability of CODECO in real-world scenarios. The goal is to evaluate CODECO’s efficiency and performance across various use cases, focusing on deployment speed and its ability to scale effectively. This challenge offers participants an opportunity to validate CODECO’s capabilities while contributing valuable insights to optimize its deployment framework.
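A minimal deployment-time benchmark for this kind of evaluation can be sketched as a timing harness with a pluggable deploy step. In a real run the deploy function would shell out to `kubectl apply` and `kubectl rollout status` against a CODECO-managed cluster; the stand-in function and replica counts below are assumptions for illustration.

```python
# Sketch of a deployment-time scalability benchmark. The deploy step is
# pluggable; replace `fake_deploy` with real kubectl calls in an actual run.
import statistics
import time

def time_deployment(deploy_fn, replicas):
    """Time a single deployment of `replicas` workload instances."""
    start = time.perf_counter()
    deploy_fn(replicas)
    return time.perf_counter() - start

def scalability_sweep(deploy_fn, replica_counts, runs=3):
    """Median deployment time per replica count, yielding a scaling curve."""
    return {n: statistics.median(time_deployment(deploy_fn, n) for _ in range(runs))
            for n in replica_counts}

# Stand-in deploy function for illustration (simulates per-replica cost).
def fake_deploy(replicas):
    time.sleep(0.001 * replicas)

results = scalability_sweep(fake_deploy, [1, 5, 10])
for n, seconds in results.items():
    print(f"{n} replicas: {seconds:.3f}s")
```

Reporting the median over several runs, rather than a single measurement, smooths out scheduler and network jitter when plotting deployment time against scale.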

This challenge focuses on evaluating the effectiveness of proposed energy-aware scheduling strategies within the CODECO framework. Participants are tasked with comparing their solutions against two key baselines: i) the standard Kubernetes (K8s) scheduler and ii) KEIDS or similar energy-aware scheduling approaches. The challenge, provided by FORTISS, will specifically assess the performance of these strategies across several SMART goals.
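One way such a baseline comparison can be framed is as an estimated-energy delta between placements. The linear power model below (idle draw plus per-busy-core draw) and the example placements are assumptions for illustration, not CODECO's or KEIDS's actual energy model.

```python
# Toy comparison of schedulers by estimated energy. The linear power model
# and core counts are illustrative assumptions only.
def node_power(used_cores, idle_w=50.0, per_core_w=15.0):
    """Linear node power model: idle draw plus per-busy-core draw.
    A node with no load is assumed powered off (0 W)."""
    return idle_w + per_core_w * used_cores if used_cores > 0 else 0.0

def placement_energy(placement):
    """Sum power across nodes for a {node: used_cores} placement."""
    return sum(node_power(cores) for cores in placement.values())

# Spreading pods (typical default behaviour) vs packing them (energy-aware).
spread = {"node-a": 2, "node-b": 2, "node-c": 2}   # 3 nodes powered on
packed = {"node-a": 6, "node-b": 0, "node-c": 0}   # consolidated on one node

print(placement_energy(spread), placement_energy(packed))  # 240.0 140.0
```

The same harness extends naturally to measured (rather than modelled) power, which is what a submission against the K8s and KEIDS baselines would ultimately report.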

This challenge aims to evaluate the resilience strategies integrated into the CODECO framework. Participants will assess the effectiveness of their proposed solutions by comparing them against two key baselines: i) CODECO without resilience and ii) the vanilla Kubernetes (K8s) scheduler. The challenge, proposed by FORTISS, focuses on measuring resilience across several SMART goals.

The CODECO Experimentation and Validation Challenge, provided by FORTISS, is designed to assess the performance of CODECOโ€™s graph-based scheduling approach, specifically Seamless Workload Migration (SWM), by comparing it against two key baselines: the vanilla Kubernetes (K8s) scheduler and the Kubernetes network-aware scheduler.

 

The CODECO Experimentation and Validation Challenge on scalability testing of fine-tuning for MARL agents is an initiative designed to evaluate and enhance the performance of Multi-Agent Reinforcement Learning (MARL) systems across complex neighbourhood environments. In this challenge, provided by I2CAT, participants are tasked with developing and submitting solutions that trigger re-training or fine-tuning based on a custom performance metric.

This CODECO Experimentation and Validation Challenge, provided by I2CAT, invites participants to evaluate the scalability and performance of Multi-Agent Reinforcement Learning (MARL) systems in increasingly complex neighbourhood environments. Specifically, the focus is on testing MARL performance as the number of clusters, and consequently the number of communicating and collaborating agents, grows. The objective is to identify the scalability limits of the MARL model, assessing at what point increased coordination begins to create bottlenecks within the CODECO system.

 

In this challenge, coordinated by ATH and UPRC, we aim to rigorously evaluate the resilience and adaptability of CODECO-managed deployments under adverse conditions. Participants will introduce artificial failures and resource bottlenecks using stress testing and chaos engineering tools, such as killing pods, saturating CPUs, or throttling network bandwidth. The objective is to assess how effectively CODECO responds to these disruptions, reconfigures resources, and maintains optimal service levels under pressure.
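The fault types listed above can be driven from a small injection loop like the sketch below. The commands, namespace, and labels are assumptions for a typical Kubernetes test bed (and `stress-ng` must be available inside the target container); adapt them to the actual CODECO deployment.

```python
# Illustrative chaos-injection loop for resilience testing. All commands,
# namespaces, and labels are hypothetical examples, not CODECO defaults.
import random
import subprocess

FAULTS = {
    # Kill running pods matching a label selector (pod-failure fault).
    "kill_pod": ["kubectl", "delete", "pod", "-n", "codeco", "--wait=false",
                 "-l", "app=demo", "--field-selector", "status.phase=Running"],
    # Saturate CPUs inside a workload container (assumes stress-ng is present).
    "cpu_stress": ["kubectl", "exec", "-n", "codeco", "deploy/demo", "--",
                   "stress-ng", "--cpu", "4", "--timeout", "30s"],
}

def inject_fault(name, dry_run=True):
    """Run one named fault; with dry_run, just return the command for review."""
    cmd = FAULTS[name]
    if dry_run:
        return cmd
    return subprocess.run(cmd, check=False).returncode

# Pick a random fault each round of the experiment.
print(inject_fault(random.choice(list(FAULTS)), dry_run=True))
```

Each injection would be paired with measurements of recovery time and service-level metrics, which is the evidence the challenge asks participants to collect. Network throttling could be added as a further fault type with a traffic-shaping tool on the node.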

 

In this challenge, spearheaded by ATH and UPRC, participants will explore how the integration of AI/ML can enhance the CODEF framework’s ability to perform testing, live debugging, and self-healing in real-time. The goal is to leverage intelligent automation to improve CODEF’s robustness, enabling faster detection and resolution of issues. By optimizing these capabilities, participants will work towards reducing the Mean Time to Repair (MTTR), ultimately increasing the overall reliability and efficiency of the system. This challenge invites participants to experiment with innovative AI/ML approaches that can transform how CODECO handles system failures and operational anomalies.

In this challenge, led by ATH and UPRC, participants will focus on proposing and implementing comprehensive end-to-end security mechanisms for the CODEF framework. The objective is to enhance the security posture of CODEF by designing and deploying robust solutions that protect data, services, and system communications across the entire lifecycle. Participants will evaluate and refine their security strategies to ensure that CODEF is resilient against potential vulnerabilities and attacks, while maintaining the integrity and availability of system operations. This challenge presents a unique opportunity to push the boundaries of security within dynamic, distributed environments.